Dec 13 14:25:06.965349 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Dec 13 12:55:10 -00 2024
Dec 13 14:25:06.965372 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 14:25:06.965383 kernel: BIOS-provided physical RAM map:
Dec 13 14:25:06.965389 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Dec 13 14:25:06.965395 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Dec 13 14:25:06.965400 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Dec 13 14:25:06.965407 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Dec 13 14:25:06.965413 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Dec 13 14:25:06.965419 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Dec 13 14:25:06.965426 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Dec 13 14:25:06.965455 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Dec 13 14:25:06.965463 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved
Dec 13 14:25:06.965471 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Dec 13 14:25:06.965477 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Dec 13 14:25:06.965484 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Dec 13 14:25:06.965493 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Dec 13 14:25:06.965499 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Dec 13 14:25:06.965505 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 13 14:25:06.965511 kernel: NX (Execute Disable) protection: active
Dec 13 14:25:06.965517 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable
Dec 13 14:25:06.965523 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable
Dec 13 14:25:06.965533 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable
Dec 13 14:25:06.965539 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable
Dec 13 14:25:06.965545 kernel: extended physical RAM map:
Dec 13 14:25:06.965551 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Dec 13 14:25:06.965558 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Dec 13 14:25:06.965565 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Dec 13 14:25:06.965571 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Dec 13 14:25:06.965583 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Dec 13 14:25:06.965590 kernel: reserve setup_data: [mem 0x000000000080c000-0x000000000080ffff] usable
Dec 13 14:25:06.965596 kernel: reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Dec 13 14:25:06.965632 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b438017] usable
Dec 13 14:25:06.965638 kernel: reserve setup_data: [mem 0x000000009b438018-0x000000009b474e57] usable
Dec 13 14:25:06.965644 kernel: reserve setup_data: [mem 0x000000009b474e58-0x000000009b475017] usable
Dec 13 14:25:06.965650 kernel: reserve setup_data: [mem 0x000000009b475018-0x000000009b47ec57] usable
Dec 13 14:25:06.965656 kernel: reserve setup_data: [mem 0x000000009b47ec58-0x000000009c8eefff] usable
Dec 13 14:25:06.965665 kernel: reserve setup_data: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved
Dec 13 14:25:06.965671 kernel: reserve setup_data: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Dec 13 14:25:06.965677 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Dec 13 14:25:06.965683 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Dec 13 14:25:06.965692 kernel: reserve setup_data: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Dec 13 14:25:06.965699 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Dec 13 14:25:06.965705 kernel: reserve setup_data: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 13 14:25:06.965730 kernel: efi: EFI v2.70 by EDK II
Dec 13 14:25:06.965737 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b673018 RNG=0x9cb73018
Dec 13 14:25:06.965743 kernel: random: crng init done
Dec 13 14:25:06.965750 kernel: SMBIOS 2.8 present.
Dec 13 14:25:06.965756 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Dec 13 14:25:06.965763 kernel: Hypervisor detected: KVM
Dec 13 14:25:06.965769 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 14:25:06.965776 kernel: kvm-clock: cpu 0, msr 1119a001, primary cpu clock
Dec 13 14:25:06.965782 kernel: kvm-clock: using sched offset of 5726713827 cycles
Dec 13 14:25:06.965795 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 14:25:06.965802 kernel: tsc: Detected 2794.748 MHz processor
Dec 13 14:25:06.965809 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 14:25:06.965816 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 14:25:06.965822 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Dec 13 14:25:06.965829 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 14:25:06.965837 kernel: Using GB pages for direct mapping
Dec 13 14:25:06.965846 kernel: Secure boot disabled
Dec 13 14:25:06.965855 kernel: ACPI: Early table checksum verification disabled
Dec 13 14:25:06.965865 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Dec 13 14:25:06.965872 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Dec 13 14:25:06.965879 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:25:06.965886 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:25:06.965892 kernel: ACPI: FACS 0x000000009CBDD000 000040
Dec 13 14:25:06.965899 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:25:06.965906 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:25:06.965915 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:25:06.965922 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:25:06.965931 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Dec 13 14:25:06.965937 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Dec 13 14:25:06.965944 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Dec 13 14:25:06.965950 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Dec 13 14:25:06.965957 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Dec 13 14:25:06.965964 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Dec 13 14:25:06.965971 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Dec 13 14:25:06.965977 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Dec 13 14:25:06.965984 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Dec 13 14:25:06.965992 kernel: No NUMA configuration found
Dec 13 14:25:06.966001 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Dec 13 14:25:06.966008 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Dec 13 14:25:06.966015 kernel: Zone ranges:
Dec 13 14:25:06.966022 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 14:25:06.966031 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Dec 13 14:25:06.966040 kernel: Normal empty
Dec 13 14:25:06.966048 kernel: Movable zone start for each node
Dec 13 14:25:06.966057 kernel: Early memory node ranges
Dec 13 14:25:06.966068 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Dec 13 14:25:06.966075 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Dec 13 14:25:06.966082 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Dec 13 14:25:06.966089 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Dec 13 14:25:06.966095 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Dec 13 14:25:06.966102 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Dec 13 14:25:06.966108 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Dec 13 14:25:06.966115 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 14:25:06.966122 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Dec 13 14:25:06.966128 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Dec 13 14:25:06.966137 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 14:25:06.966144 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Dec 13 14:25:06.966153 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Dec 13 14:25:06.966162 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Dec 13 14:25:06.966171 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 14:25:06.966179 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 14:25:06.966187 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 14:25:06.966195 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 14:25:06.966202 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 14:25:06.966210 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 14:25:06.966217 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 14:25:06.966224 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 14:25:06.966234 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 14:25:06.966240 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 14:25:06.966247 kernel: TSC deadline timer available
Dec 13 14:25:06.966254 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Dec 13 14:25:06.966263 kernel: kvm-guest: KVM setup pv remote TLB flush
Dec 13 14:25:06.966270 kernel: kvm-guest: setup PV sched yield
Dec 13 14:25:06.966278 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Dec 13 14:25:06.966285 kernel: Booting paravirtualized kernel on KVM
Dec 13 14:25:06.966297 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 14:25:06.966305 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1
Dec 13 14:25:06.966312 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288
Dec 13 14:25:06.966319 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152
Dec 13 14:25:06.966326 kernel: pcpu-alloc: [0] 0 1 2 3
Dec 13 14:25:06.966333 kernel: kvm-guest: setup async PF for cpu 0
Dec 13 14:25:06.966340 kernel: kvm-guest: stealtime: cpu 0, msr 9b21c0c0
Dec 13 14:25:06.966347 kernel: kvm-guest: PV spinlocks enabled
Dec 13 14:25:06.966354 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 14:25:06.966361 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Dec 13 14:25:06.966370 kernel: Policy zone: DMA32
Dec 13 14:25:06.966378 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 14:25:06.966386 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 14:25:06.966393 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 14:25:06.966401 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 14:25:06.966408 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 14:25:06.966415 kernel: Memory: 2397432K/2567000K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47472K init, 4112K bss, 169308K reserved, 0K cma-reserved)
Dec 13 14:25:06.966422 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 13 14:25:06.966429 kernel: ftrace: allocating 34549 entries in 135 pages
Dec 13 14:25:06.966443 kernel: ftrace: allocated 135 pages with 4 groups
Dec 13 14:25:06.966451 kernel: rcu: Hierarchical RCU implementation.
Dec 13 14:25:06.966459 kernel: rcu: RCU event tracing is enabled.
Dec 13 14:25:06.966466 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 13 14:25:06.966475 kernel: Rude variant of Tasks RCU enabled.
Dec 13 14:25:06.966482 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 14:25:06.966490 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 14:25:06.966497 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 13 14:25:06.966503 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Dec 13 14:25:06.966510 kernel: Console: colour dummy device 80x25
Dec 13 14:25:06.966517 kernel: printk: console [ttyS0] enabled
Dec 13 14:25:06.966524 kernel: ACPI: Core revision 20210730
Dec 13 14:25:06.966531 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 13 14:25:06.966540 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 14:25:06.966548 kernel: x2apic enabled
Dec 13 14:25:06.966557 kernel: Switched APIC routing to physical x2apic.
Dec 13 14:25:06.966567 kernel: kvm-guest: setup PV IPIs
Dec 13 14:25:06.966576 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 13 14:25:06.966585 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 13 14:25:06.966593 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Dec 13 14:25:06.966600 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 13 14:25:06.966607 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 13 14:25:06.966616 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 13 14:25:06.966623 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 14:25:06.966632 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 14:25:06.966640 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 14:25:06.966647 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 14:25:06.966654 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 13 14:25:06.966661 kernel: RETBleed: Mitigation: untrained return thunk
Dec 13 14:25:06.966671 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 14:25:06.966678 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Dec 13 14:25:06.966687 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 14:25:06.966694 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 14:25:06.966701 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 14:25:06.966708 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 14:25:06.966745 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Dec 13 14:25:06.966755 kernel: Freeing SMP alternatives memory: 32K
Dec 13 14:25:06.966761 kernel: pid_max: default: 32768 minimum: 301
Dec 13 14:25:06.966768 kernel: LSM: Security Framework initializing
Dec 13 14:25:06.966775 kernel: SELinux: Initializing.
Dec 13 14:25:06.966786 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 14:25:06.966793 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 14:25:06.966800 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 13 14:25:06.966808 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 13 14:25:06.966815 kernel: ... version: 0
Dec 13 14:25:06.966821 kernel: ... bit width: 48
Dec 13 14:25:06.966828 kernel: ... generic registers: 6
Dec 13 14:25:06.966835 kernel: ... value mask: 0000ffffffffffff
Dec 13 14:25:06.966842 kernel: ... max period: 00007fffffffffff
Dec 13 14:25:06.966850 kernel: ... fixed-purpose events: 0
Dec 13 14:25:06.966857 kernel: ... event mask: 000000000000003f
Dec 13 14:25:06.966864 kernel: signal: max sigframe size: 1776
Dec 13 14:25:06.966871 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 14:25:06.966878 kernel: smp: Bringing up secondary CPUs ...
Dec 13 14:25:06.966885 kernel: x86: Booting SMP configuration:
Dec 13 14:25:06.966892 kernel: .... node #0, CPUs: #1
Dec 13 14:25:06.966901 kernel: kvm-clock: cpu 1, msr 1119a041, secondary cpu clock
Dec 13 14:25:06.966911 kernel: kvm-guest: setup async PF for cpu 1
Dec 13 14:25:06.966922 kernel: kvm-guest: stealtime: cpu 1, msr 9b29c0c0
Dec 13 14:25:06.966929 kernel: #2
Dec 13 14:25:06.966936 kernel: kvm-clock: cpu 2, msr 1119a081, secondary cpu clock
Dec 13 14:25:06.966943 kernel: kvm-guest: setup async PF for cpu 2
Dec 13 14:25:06.966950 kernel: kvm-guest: stealtime: cpu 2, msr 9b31c0c0
Dec 13 14:25:06.966957 kernel: #3
Dec 13 14:25:06.966964 kernel: kvm-clock: cpu 3, msr 1119a0c1, secondary cpu clock
Dec 13 14:25:06.966971 kernel: kvm-guest: setup async PF for cpu 3
Dec 13 14:25:06.966978 kernel: kvm-guest: stealtime: cpu 3, msr 9b39c0c0
Dec 13 14:25:06.966987 kernel: smp: Brought up 1 node, 4 CPUs
Dec 13 14:25:06.966994 kernel: smpboot: Max logical packages: 1
Dec 13 14:25:06.967001 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Dec 13 14:25:06.967008 kernel: devtmpfs: initialized
Dec 13 14:25:06.967018 kernel: x86/mm: Memory block size: 128MB
Dec 13 14:25:06.967025 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Dec 13 14:25:06.967033 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Dec 13 14:25:06.967040 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Dec 13 14:25:06.967047 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Dec 13 14:25:06.967056 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Dec 13 14:25:06.967063 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 14:25:06.967070 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 13 14:25:06.967077 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 14:25:06.967084 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 14:25:06.967091 kernel: audit: initializing netlink subsys (disabled)
Dec 13 14:25:06.967098 kernel: audit: type=2000 audit(1734099905.963:1): state=initialized audit_enabled=0 res=1
Dec 13 14:25:06.967105 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 14:25:06.967112 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 14:25:06.967121 kernel: cpuidle: using governor menu
Dec 13 14:25:06.967128 kernel: ACPI: bus type PCI registered
Dec 13 14:25:06.967135 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 14:25:06.967141 kernel: dca service started, version 1.12.1
Dec 13 14:25:06.967149 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Dec 13 14:25:06.967157 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820
Dec 13 14:25:06.967167 kernel: PCI: Using configuration type 1 for base access
Dec 13 14:25:06.967176 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 14:25:06.967184 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 14:25:06.967192 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 14:25:06.967199 kernel: ACPI: Added _OSI(Module Device)
Dec 13 14:25:06.967208 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 14:25:06.967217 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 14:25:06.967226 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 14:25:06.967235 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 14:25:06.967243 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 14:25:06.967250 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 14:25:06.967257 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 14:25:06.967266 kernel: ACPI: Interpreter enabled
Dec 13 14:25:06.967273 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 13 14:25:06.967280 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 14:25:06.967287 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 14:25:06.967294 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 13 14:25:06.967301 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 14:25:06.967462 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 14:25:06.967544 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Dec 13 14:25:06.967639 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Dec 13 14:25:06.967650 kernel: PCI host bridge to bus 0000:00
Dec 13 14:25:06.967757 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 14:25:06.967831 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 14:25:06.967899 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 14:25:06.967981 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Dec 13 14:25:06.968089 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 13 14:25:06.968283 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Dec 13 14:25:06.968376 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 14:25:06.968493 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Dec 13 14:25:06.968590 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Dec 13 14:25:06.968681 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Dec 13 14:25:06.968788 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Dec 13 14:25:06.968869 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Dec 13 14:25:06.968945 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Dec 13 14:25:06.969031 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 14:25:06.969164 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Dec 13 14:25:06.969278 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Dec 13 14:25:06.974273 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Dec 13 14:25:06.974427 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Dec 13 14:25:06.974584 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Dec 13 14:25:06.974700 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Dec 13 14:25:06.974828 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Dec 13 14:25:06.974942 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Dec 13 14:25:06.975930 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Dec 13 14:25:06.976064 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Dec 13 14:25:06.976249 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Dec 13 14:25:06.976450 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Dec 13 14:25:06.976641 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Dec 13 14:25:06.976824 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Dec 13 14:25:06.976976 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 13 14:25:06.977150 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Dec 13 14:25:06.977313 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Dec 13 14:25:06.977488 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Dec 13 14:25:06.977743 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Dec 13 14:25:06.977948 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Dec 13 14:25:06.977969 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 14:25:06.977996 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 14:25:06.978009 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 14:25:06.978025 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 14:25:06.978036 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 13 14:25:06.978050 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 13 14:25:06.978060 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 13 14:25:06.978087 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 13 14:25:06.978097 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 13 14:25:06.978114 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 13 14:25:06.978125 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 13 14:25:06.978135 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 13 14:25:06.978145 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 13 14:25:06.978171 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 13 14:25:06.978191 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 13 14:25:06.978203 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 13 14:25:06.978213 kernel: iommu: Default domain type: Translated
Dec 13 14:25:06.978240 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 14:25:06.978419 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 13 14:25:06.978631 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 14:25:06.978767 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 13 14:25:06.978783 kernel: vgaarb: loaded
Dec 13 14:25:06.978794 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 14:25:06.978809 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 13 14:25:06.978820 kernel: PTP clock support registered
Dec 13 14:25:06.978830 kernel: Registered efivars operations
Dec 13 14:25:06.978841 kernel: PCI: Using ACPI for IRQ routing
Dec 13 14:25:06.978851 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 14:25:06.978862 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Dec 13 14:25:06.978872 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Dec 13 14:25:06.978882 kernel: e820: reserve RAM buffer [mem 0x9b438018-0x9bffffff]
Dec 13 14:25:06.978892 kernel: e820: reserve RAM buffer [mem 0x9b475018-0x9bffffff]
Dec 13 14:25:06.978904 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Dec 13 14:25:06.978914 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Dec 13 14:25:06.978924 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 13 14:25:06.978934 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 13 14:25:06.978944 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 14:25:06.978955 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 14:25:06.978966 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 14:25:06.978976 kernel: pnp: PnP ACPI init
Dec 13 14:25:06.979100 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 13 14:25:06.979119 kernel: pnp: PnP ACPI: found 6 devices
Dec 13 14:25:06.979130 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 14:25:06.979140 kernel: NET: Registered PF_INET protocol family
Dec 13 14:25:06.979151 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 14:25:06.979161 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 14:25:06.979171 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 14:25:06.979181 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 14:25:06.979191 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Dec 13 14:25:06.979204 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 14:25:06.979214 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 14:25:06.979224 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 14:25:06.979234 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 14:25:06.979245 kernel: NET: Registered PF_XDP protocol family
Dec 13 14:25:06.979351 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Dec 13 14:25:06.979467 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Dec 13 14:25:06.979569 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 14:25:06.979665 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 14:25:06.979773 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 14:25:06.979864 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Dec 13 14:25:06.979954 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 13 14:25:06.980042 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Dec 13 14:25:06.980056 kernel: PCI: CLS 0 bytes, default 64
Dec 13 14:25:06.980066 kernel: Initialise system trusted keyrings
Dec 13 14:25:06.980076 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 14:25:06.980089 kernel: Key type asymmetric registered
Dec 13 14:25:06.980099 kernel: Asymmetric key parser 'x509' registered
Dec 13 14:25:06.980110 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 13 14:25:06.980135 kernel: io scheduler mq-deadline registered
Dec 13 14:25:06.980147 kernel: io scheduler kyber registered
Dec 13 14:25:06.980157 kernel: io scheduler bfq registered
Dec 13 14:25:06.980167 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 14:25:06.980178 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 13 14:25:06.980188 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Dec 13 14:25:06.980203 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Dec 13 14:25:06.980214 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 14:25:06.980224 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 14:25:06.980235 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 14:25:06.980245 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 14:25:06.980255 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 14:25:06.980384 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 13 14:25:06.980401 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 14:25:06.980513 kernel: rtc_cmos 00:04: registered as rtc0
Dec 13 14:25:06.980609 kernel: rtc_cmos 00:04: setting system clock to 2024-12-13T14:25:06 UTC (1734099906)
Dec 13 14:25:06.980700 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Dec 13 14:25:06.980741 kernel: efifb: probing for efifb
Dec 13 14:25:06.980752 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Dec 13 14:25:06.980762 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Dec 13 14:25:06.980772 kernel: efifb: scrolling: redraw
Dec 13 14:25:06.980783 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Dec 13 14:25:06.980793 kernel: Console: switching to colour frame buffer device 160x50
Dec 13 14:25:06.980806 kernel: fb0: EFI VGA frame buffer device
Dec 13 14:25:06.980816 kernel: pstore: Registered efi as persistent store backend
Dec 13 14:25:06.980827 kernel: NET: Registered PF_INET6 protocol family
Dec 13 14:25:06.980839 kernel: Segment Routing with IPv6
Dec 13 14:25:06.980853 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 14:25:06.980863 kernel: NET: Registered PF_PACKET protocol family
Dec 13 14:25:06.980875 kernel: Key type dns_resolver registered
Dec 13 14:25:06.980885 kernel: IPI shorthand broadcast: enabled
Dec 13 14:25:06.980895 kernel: sched_clock: Marking stable (553132590, 138165492)->(763144969, -71846887)
Dec 13 14:25:06.980905 kernel: registered taskstats version 1
Dec 13 14:25:06.980916 kernel: Loading compiled-in X.509 certificates
Dec 13 14:25:06.980927 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: e1d88c9e01f5bb2adeb5b99325e46e5ca8dff115'
Dec 13 14:25:06.980937 kernel: Key type .fscrypt registered
Dec 13 14:25:06.980948 kernel: Key type fscrypt-provisioning registered
Dec 13 14:25:06.980958 kernel: pstore: Using crash dump compression: deflate
Dec 13 14:25:06.980970 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 14:25:06.980981 kernel: ima: Allocated hash algorithm: sha1
Dec 13 14:25:06.980991 kernel: ima: No architecture policies found
Dec 13 14:25:06.981001 kernel: clk: Disabling unused clocks
Dec 13 14:25:06.981011 kernel: Freeing unused kernel image (initmem) memory: 47472K
Dec 13 14:25:06.981021 kernel: Write protecting the kernel read-only data: 28672k
Dec 13 14:25:06.981033 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Dec 13 14:25:06.981043 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K
Dec 13 14:25:06.981054 kernel: Run /init as init process
Dec 13 14:25:06.981067 kernel: with arguments:
Dec 13 14:25:06.981076 kernel: /init
Dec 13 14:25:06.981086 kernel: with environment:
Dec 13 14:25:06.981096 kernel: HOME=/
Dec 13 14:25:06.981107 kernel: TERM=linux
Dec 13 14:25:06.981117 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 14:25:06.981131 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 14:25:06.981146 systemd[1]: Detected virtualization kvm.
Dec 13 14:25:06.981160 systemd[1]: Detected architecture x86-64.
Dec 13 14:25:06.981171 systemd[1]: Running in initrd.
Dec 13 14:25:06.981182 systemd[1]: No hostname configured, using default hostname.
Dec 13 14:25:06.981192 systemd[1]: Hostname set to .
Dec 13 14:25:06.981203 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 14:25:06.981214 systemd[1]: Queued start job for default target initrd.target.
Dec 13 14:25:06.981226 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 14:25:06.981237 systemd[1]: Reached target cryptsetup.target.
Dec 13 14:25:06.981250 systemd[1]: Reached target paths.target.
Dec 13 14:25:06.981261 systemd[1]: Reached target slices.target.
Dec 13 14:25:06.981272 systemd[1]: Reached target swap.target.
Dec 13 14:25:06.981282 systemd[1]: Reached target timers.target.
Dec 13 14:25:06.981294 systemd[1]: Listening on iscsid.socket.
Dec 13 14:25:06.981304 systemd[1]: Listening on iscsiuio.socket.
Dec 13 14:25:06.981315 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 14:25:06.981329 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 14:25:06.981340 systemd[1]: Listening on systemd-journald.socket.
Dec 13 14:25:06.981352 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 14:25:06.981365 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 14:25:06.981376 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 14:25:06.981387 systemd[1]: Reached target sockets.target.
Dec 13 14:25:06.981398 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 14:25:06.981409 systemd[1]: Finished network-cleanup.service.
Dec 13 14:25:06.981420 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 14:25:06.981443 systemd[1]: Starting systemd-journald.service...
Dec 13 14:25:06.981455 systemd[1]: Starting systemd-modules-load.service...
Dec 13 14:25:06.981466 systemd[1]: Starting systemd-resolved.service...
Dec 13 14:25:06.981478 systemd[1]: Starting systemd-vconsole-setup.service...
Dec 13 14:25:06.981489 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 14:25:06.981499 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 14:25:06.981510 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 14:25:06.981521 systemd[1]: Finished systemd-vconsole-setup.service.
Dec 13 14:25:06.981533 kernel: audit: type=1130 audit(1734099906.969:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:06.981546 systemd[1]: Starting dracut-cmdline-ask.service...
Dec 13 14:25:06.981561 systemd-journald[197]: Journal started
Dec 13 14:25:06.981623 systemd-journald[197]: Runtime Journal (/run/log/journal/9c361cd09f8c40629c29d52ff21d553d) is 6.0M, max 48.4M, 42.4M free.
Dec 13 14:25:06.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:06.962469 systemd-modules-load[198]: Inserted module 'overlay'
Dec 13 14:25:06.986899 systemd[1]: Started systemd-journald.service.
Dec 13 14:25:06.992790 kernel: audit: type=1130 audit(1734099906.986:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:06.992823 kernel: audit: type=1130 audit(1734099906.987:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:06.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:06.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:06.987657 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 14:25:06.994735 systemd-resolved[199]: Positive Trust Anchors:
Dec 13 14:25:06.994745 systemd-resolved[199]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 14:25:06.994780 systemd-resolved[199]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 14:25:06.997761 systemd-resolved[199]: Defaulting to hostname 'linux'.
Dec 13 14:25:07.010024 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 14:25:07.010088 kernel: audit: type=1130 audit(1734099907.000:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:07.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:06.998849 systemd[1]: Started systemd-resolved.service.
Dec 13 14:25:07.001810 systemd[1]: Reached target nss-lookup.target.
Dec 13 14:25:07.020130 kernel: audit: type=1130 audit(1734099907.013:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:07.020162 kernel: Bridge firewalling registered
Dec 13 14:25:07.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:07.013047 systemd[1]: Finished dracut-cmdline-ask.service.
Dec 13 14:25:07.015586 systemd[1]: Starting dracut-cmdline.service...
Dec 13 14:25:07.020047 systemd-modules-load[198]: Inserted module 'br_netfilter'
Dec 13 14:25:07.038287 dracut-cmdline[215]: dracut-dracut-053
Dec 13 14:25:07.039973 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 14:25:07.046733 kernel: SCSI subsystem initialized
Dec 13 14:25:07.062371 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 14:25:07.062397 kernel: device-mapper: uevent: version 1.0.3
Dec 13 14:25:07.063980 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Dec 13 14:25:07.067808 systemd-modules-load[198]: Inserted module 'dm_multipath'
Dec 13 14:25:07.069241 systemd[1]: Finished systemd-modules-load.service.
Dec 13 14:25:07.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:07.071946 systemd[1]: Starting systemd-sysctl.service...
Dec 13 14:25:07.074743 kernel: audit: type=1130 audit(1734099907.070:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:07.081100 systemd[1]: Finished systemd-sysctl.service.
Dec 13 14:25:07.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:07.085773 kernel: audit: type=1130 audit(1734099907.080:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:07.103751 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 14:25:07.122750 kernel: iscsi: registered transport (tcp)
Dec 13 14:25:07.149859 kernel: iscsi: registered transport (qla4xxx)
Dec 13 14:25:07.149944 kernel: QLogic iSCSI HBA Driver
Dec 13 14:25:07.187832 systemd[1]: Finished dracut-cmdline.service.
Dec 13 14:25:07.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:07.189225 systemd[1]: Starting dracut-pre-udev.service...
Dec 13 14:25:07.193993 kernel: audit: type=1130 audit(1734099907.187:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:07.239744 kernel: raid6: avx2x4 gen() 29191 MB/s
Dec 13 14:25:07.256734 kernel: raid6: avx2x4 xor() 5899 MB/s
Dec 13 14:25:07.289732 kernel: raid6: avx2x2 gen() 24123 MB/s
Dec 13 14:25:07.323750 kernel: raid6: avx2x2 xor() 15886 MB/s
Dec 13 14:25:07.372743 kernel: raid6: avx2x1 gen() 17189 MB/s
Dec 13 14:25:07.389735 kernel: raid6: avx2x1 xor() 10992 MB/s
Dec 13 14:25:07.406750 kernel: raid6: sse2x4 gen() 10189 MB/s
Dec 13 14:25:07.428742 kernel: raid6: sse2x4 xor() 6333 MB/s
Dec 13 14:25:07.445737 kernel: raid6: sse2x2 gen() 10645 MB/s
Dec 13 14:25:07.480747 kernel: raid6: sse2x2 xor() 9632 MB/s
Dec 13 14:25:07.497736 kernel: raid6: sse2x1 gen() 8333 MB/s
Dec 13 14:25:07.515153 kernel: raid6: sse2x1 xor() 5789 MB/s
Dec 13 14:25:07.515188 kernel: raid6: using algorithm avx2x4 gen() 29191 MB/s
Dec 13 14:25:07.515202 kernel: raid6: .... xor() 5899 MB/s, rmw enabled
Dec 13 14:25:07.515963 kernel: raid6: using avx2x2 recovery algorithm
Dec 13 14:25:07.528751 kernel: xor: automatically using best checksumming function avx
Dec 13 14:25:07.632757 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Dec 13 14:25:07.641996 systemd[1]: Finished dracut-pre-udev.service.
Dec 13 14:25:07.646741 kernel: audit: type=1130 audit(1734099907.642:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:07.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:07.645000 audit: BPF prog-id=7 op=LOAD
Dec 13 14:25:07.645000 audit: BPF prog-id=8 op=LOAD
Dec 13 14:25:07.647152 systemd[1]: Starting systemd-udevd.service...
Dec 13 14:25:07.663355 systemd-udevd[399]: Using default interface naming scheme 'v252'.
Dec 13 14:25:07.668781 systemd[1]: Started systemd-udevd.service.
Dec 13 14:25:07.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:07.687815 systemd[1]: Starting dracut-pre-trigger.service...
Dec 13 14:25:07.701846 dracut-pre-trigger[413]: rd.md=0: removing MD RAID activation
Dec 13 14:25:07.731620 systemd[1]: Finished dracut-pre-trigger.service.
Dec 13 14:25:07.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:07.733235 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 14:25:07.773972 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 14:25:07.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:07.813547 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Dec 13 14:25:07.851633 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 14:25:07.851650 kernel: libata version 3.00 loaded.
Dec 13 14:25:07.851660 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 14:25:07.851670 kernel: GPT:9289727 != 19775487
Dec 13 14:25:07.851678 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 14:25:07.851694 kernel: GPT:9289727 != 19775487
Dec 13 14:25:07.851702 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 14:25:07.851724 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 14:25:07.851733 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 14:25:07.851743 kernel: AES CTR mode by8 optimization enabled
Dec 13 14:25:07.851751 kernel: ahci 0000:00:1f.2: version 3.0
Dec 13 14:25:07.872858 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Dec 13 14:25:07.872877 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Dec 13 14:25:07.872988 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Dec 13 14:25:07.873081 kernel: scsi host0: ahci
Dec 13 14:25:07.873187 kernel: scsi host1: ahci
Dec 13 14:25:07.873306 kernel: scsi host2: ahci
Dec 13 14:25:07.873396 kernel: scsi host3: ahci
Dec 13 14:25:07.873500 kernel: scsi host4: ahci
Dec 13 14:25:07.873593 kernel: scsi host5: ahci
Dec 13 14:25:07.873691 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Dec 13 14:25:07.873702 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Dec 13 14:25:07.873725 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Dec 13 14:25:07.873734 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Dec 13 14:25:07.873742 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Dec 13 14:25:07.873751 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Dec 13 14:25:07.877743 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (446)
Dec 13 14:25:07.882618 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Dec 13 14:25:07.885572 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Dec 13 14:25:07.886035 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Dec 13 14:25:07.891273 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 14:25:07.896950 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Dec 13 14:25:07.898053 systemd[1]: Starting disk-uuid.service...
Dec 13 14:25:07.905814 disk-uuid[542]: Primary Header is updated.
Dec 13 14:25:07.905814 disk-uuid[542]: Secondary Entries is updated.
Dec 13 14:25:07.905814 disk-uuid[542]: Secondary Header is updated.
Dec 13 14:25:07.910736 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 14:25:07.921775 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 14:25:08.185057 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Dec 13 14:25:08.185145 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Dec 13 14:25:08.185156 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Dec 13 14:25:08.187064 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Dec 13 14:25:08.187735 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Dec 13 14:25:08.188759 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 13 14:25:08.189740 kernel: ata3.00: applying bridge limits
Dec 13 14:25:08.189763 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Dec 13 14:25:08.190771 kernel: ata3.00: configured for UDMA/100
Dec 13 14:25:08.191742 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Dec 13 14:25:08.228760 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 13 14:25:08.245441 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 13 14:25:08.245458 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Dec 13 14:25:08.936539 disk-uuid[543]: The operation has completed successfully.
Dec 13 14:25:08.938173 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 14:25:08.959491 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 14:25:08.959600 systemd[1]: Finished disk-uuid.service.
Dec 13 14:25:08.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:08.961000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:08.969109 systemd[1]: Starting verity-setup.service...
Dec 13 14:25:08.983742 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Dec 13 14:25:09.002966 systemd[1]: Found device dev-mapper-usr.device.
Dec 13 14:25:09.004929 systemd[1]: Mounting sysusr-usr.mount...
Dec 13 14:25:09.007358 systemd[1]: Finished verity-setup.service.
Dec 13 14:25:09.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:09.069745 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Dec 13 14:25:09.070005 systemd[1]: Mounted sysusr-usr.mount.
Dec 13 14:25:09.070410 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Dec 13 14:25:09.071517 systemd[1]: Starting ignition-setup.service...
Dec 13 14:25:09.074060 systemd[1]: Starting parse-ip-for-networkd.service...
Dec 13 14:25:09.085102 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 14:25:09.085134 kernel: BTRFS info (device vda6): using free space tree
Dec 13 14:25:09.085144 kernel: BTRFS info (device vda6): has skinny extents
Dec 13 14:25:09.094221 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 14:25:09.147522 systemd[1]: Finished parse-ip-for-networkd.service.
Dec 13 14:25:09.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:09.149000 audit: BPF prog-id=9 op=LOAD
Dec 13 14:25:09.150016 systemd[1]: Starting systemd-networkd.service...
Dec 13 14:25:09.170877 systemd-networkd[718]: lo: Link UP
Dec 13 14:25:09.170886 systemd-networkd[718]: lo: Gained carrier
Dec 13 14:25:09.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:09.171313 systemd-networkd[718]: Enumeration completed
Dec 13 14:25:09.171400 systemd[1]: Started systemd-networkd.service.
Dec 13 14:25:09.171535 systemd-networkd[718]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 14:25:09.171976 systemd[1]: Reached target network.target.
Dec 13 14:25:09.174122 systemd-networkd[718]: eth0: Link UP
Dec 13 14:25:09.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:09.174125 systemd-networkd[718]: eth0: Gained carrier
Dec 13 14:25:09.174749 systemd[1]: Starting iscsiuio.service...
Dec 13 14:25:09.179506 systemd[1]: Started iscsiuio.service.
Dec 13 14:25:09.181581 systemd[1]: Starting iscsid.service...
Dec 13 14:25:09.185421 iscsid[723]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 14:25:09.185421 iscsid[723]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Dec 13 14:25:09.185421 iscsid[723]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Dec 13 14:25:09.185421 iscsid[723]: If using hardware iscsi like qla4xxx this message can be ignored.
Dec 13 14:25:09.185421 iscsid[723]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 14:25:09.185421 iscsid[723]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Dec 13 14:25:09.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:09.186352 systemd[1]: Started iscsid.service.
Dec 13 14:25:09.193107 systemd[1]: Starting dracut-initqueue.service...
Dec 13 14:25:09.198907 systemd-networkd[718]: eth0: DHCPv4 address 10.0.0.92/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 14:25:09.209401 systemd[1]: Finished dracut-initqueue.service.
Dec 13 14:25:09.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:09.211296 systemd[1]: Reached target remote-fs-pre.target.
Dec 13 14:25:09.211973 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 14:25:09.212300 systemd[1]: Reached target remote-fs.target.
Dec 13 14:25:09.216005 systemd[1]: Starting dracut-pre-mount.service...
Dec 13 14:25:09.226401 systemd[1]: Finished dracut-pre-mount.service.
Dec 13 14:25:09.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:09.227903 systemd[1]: Finished ignition-setup.service.
Dec 13 14:25:09.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:09.229992 systemd[1]: Starting ignition-fetch-offline.service...
Dec 13 14:25:09.274789 ignition[738]: Ignition 2.14.0
Dec 13 14:25:09.274802 ignition[738]: Stage: fetch-offline
Dec 13 14:25:09.274868 ignition[738]: no configs at "/usr/lib/ignition/base.d"
Dec 13 14:25:09.274886 ignition[738]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 14:25:09.275020 ignition[738]: parsed url from cmdline: ""
Dec 13 14:25:09.275024 ignition[738]: no config URL provided
Dec 13 14:25:09.275031 ignition[738]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 14:25:09.275040 ignition[738]: no config at "/usr/lib/ignition/user.ign"
Dec 13 14:25:09.275063 ignition[738]: op(1): [started] loading QEMU firmware config module
Dec 13 14:25:09.275074 ignition[738]: op(1): executing: "modprobe" "qemu_fw_cfg"
Dec 13 14:25:09.285318 ignition[738]: op(1): [finished] loading QEMU firmware config module
Dec 13 14:25:09.286811 ignition[738]: parsing config with SHA512: 6ab9a0510207c9a4216fa6010046e0d1d65702eb30b97a478078f2c080d4b6e30bd82782e493ad7726db994b42f96412c0ba2cd4914af347a1f7eeb5f4674fdc
Dec 13 14:25:09.292410 unknown[738]: fetched base config from "system"
Dec 13 14:25:09.292433 unknown[738]: fetched user config from "qemu"
Dec 13 14:25:09.292803 ignition[738]: fetch-offline: fetch-offline passed
Dec 13 14:25:09.294641 systemd[1]: Finished ignition-fetch-offline.service.
Dec 13 14:25:09.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:09.292872 ignition[738]: Ignition finished successfully
Dec 13 14:25:09.295258 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 13 14:25:09.297049 systemd[1]: Starting ignition-kargs.service...
Dec 13 14:25:09.310938 ignition[746]: Ignition 2.14.0
Dec 13 14:25:09.310950 ignition[746]: Stage: kargs
Dec 13 14:25:09.311047 ignition[746]: no configs at "/usr/lib/ignition/base.d"
Dec 13 14:25:09.311057 ignition[746]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 14:25:09.315170 ignition[746]: kargs: kargs passed
Dec 13 14:25:09.315220 ignition[746]: Ignition finished successfully
Dec 13 14:25:09.318084 systemd[1]: Finished ignition-kargs.service.
Dec 13 14:25:09.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:09.319795 systemd[1]: Starting ignition-disks.service...
Dec 13 14:25:09.328946 ignition[752]: Ignition 2.14.0
Dec 13 14:25:09.328961 ignition[752]: Stage: disks
Dec 13 14:25:09.329096 ignition[752]: no configs at "/usr/lib/ignition/base.d"
Dec 13 14:25:09.329112 ignition[752]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 14:25:09.330293 ignition[752]: disks: disks passed
Dec 13 14:25:09.330359 ignition[752]: Ignition finished successfully
Dec 13 14:25:09.335266 systemd[1]: Finished ignition-disks.service.
Dec 13 14:25:09.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:09.337129 systemd[1]: Reached target initrd-root-device.target.
Dec 13 14:25:09.338343 systemd[1]: Reached target local-fs-pre.target.
Dec 13 14:25:09.340348 systemd[1]: Reached target local-fs.target.
Dec 13 14:25:09.341254 systemd[1]: Reached target sysinit.target.
Dec 13 14:25:09.342210 systemd[1]: Reached target basic.target.
Dec 13 14:25:09.344973 systemd[1]: Starting systemd-fsck-root.service...
Dec 13 14:25:09.355484 systemd-fsck[760]: ROOT: clean, 621/553520 files, 56021/553472 blocks
Dec 13 14:25:09.362920 systemd[1]: Finished systemd-fsck-root.service.
Dec 13 14:25:09.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:09.366699 systemd[1]: Mounting sysroot.mount...
Dec 13 14:25:09.375757 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Dec 13 14:25:09.376498 systemd[1]: Mounted sysroot.mount.
Dec 13 14:25:09.377189 systemd[1]: Reached target initrd-root-fs.target.
Dec 13 14:25:09.380119 systemd[1]: Mounting sysroot-usr.mount...
Dec 13 14:25:09.380987 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Dec 13 14:25:09.381035 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 14:25:09.381062 systemd[1]: Reached target ignition-diskful.target.
Dec 13 14:25:09.383664 systemd[1]: Mounted sysroot-usr.mount.
Dec 13 14:25:09.385427 systemd[1]: Starting initrd-setup-root.service...
Dec 13 14:25:09.394628 initrd-setup-root[770]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 14:25:09.398446 initrd-setup-root[778]: cut: /sysroot/etc/group: No such file or directory
Dec 13 14:25:09.403254 initrd-setup-root[786]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 14:25:09.407874 initrd-setup-root[794]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 14:25:09.447756 systemd[1]: Finished initrd-setup-root.service.
Dec 13 14:25:09.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:09.449268 systemd[1]: Starting ignition-mount.service...
Dec 13 14:25:09.450929 systemd[1]: Starting sysroot-boot.service...
Dec 13 14:25:09.465244 bash[811]: umount: /sysroot/usr/share/oem: not mounted.
Dec 13 14:25:09.471984 ignition[812]: INFO : Ignition 2.14.0
Dec 13 14:25:09.473120 ignition[812]: INFO : Stage: mount
Dec 13 14:25:09.473120 ignition[812]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 14:25:09.473120 ignition[812]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 14:25:09.476120 ignition[812]: INFO : mount: mount passed
Dec 13 14:25:09.476120 ignition[812]: INFO : Ignition finished successfully
Dec 13 14:25:09.478158 systemd[1]: Finished ignition-mount.service.
Dec 13 14:25:09.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:09.485868 systemd[1]: Finished sysroot-boot.service.
Dec 13 14:25:09.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:10.017793 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Dec 13 14:25:10.026591 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (821)
Dec 13 14:25:10.026628 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 14:25:10.026638 kernel: BTRFS info (device vda6): using free space tree
Dec 13 14:25:10.027408 kernel: BTRFS info (device vda6): has skinny extents
Dec 13 14:25:10.031782 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Dec 13 14:25:10.033530 systemd[1]: Starting ignition-files.service...
Dec 13 14:25:10.048198 ignition[841]: INFO : Ignition 2.14.0 Dec 13 14:25:10.048198 ignition[841]: INFO : Stage: files Dec 13 14:25:10.050322 ignition[841]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 14:25:10.050322 ignition[841]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 14:25:10.050322 ignition[841]: DEBUG : files: compiled without relabeling support, skipping Dec 13 14:25:10.055059 ignition[841]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 14:25:10.055059 ignition[841]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 14:25:10.055059 ignition[841]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 14:25:10.055059 ignition[841]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 14:25:10.055059 ignition[841]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 14:25:10.055059 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 14:25:10.055059 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 14:25:10.055059 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 13 14:25:10.055059 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 14:25:10.055059 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:25:10.055059 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:25:10.055059 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(6): 
[started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:25:10.055059 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:25:10.055059 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:25:10.055059 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 13 14:25:10.053359 unknown[841]: wrote ssh authorized keys file for user: core Dec 13 14:25:10.418762 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Dec 13 14:25:10.766143 ignition[841]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:25:10.766143 ignition[841]: INFO : files: op(8): [started] processing unit "containerd.service" Dec 13 14:25:10.771471 ignition[841]: INFO : files: op(8): op(9): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 14:25:10.771471 ignition[841]: INFO : files: op(8): op(9): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 14:25:10.771471 ignition[841]: INFO : files: op(8): [finished] processing unit "containerd.service" Dec 13 14:25:10.771471 ignition[841]: INFO : files: op(a): [started] processing unit "coreos-metadata.service" Dec 13 14:25:10.771471 ignition[841]: INFO : files: op(a): op(b): [started] writing unit "coreos-metadata.service" at 
"/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 14:25:10.771471 ignition[841]: INFO : files: op(a): op(b): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 14:25:10.771471 ignition[841]: INFO : files: op(a): [finished] processing unit "coreos-metadata.service" Dec 13 14:25:10.771471 ignition[841]: INFO : files: op(c): [started] setting preset to disabled for "coreos-metadata.service" Dec 13 14:25:10.771471 ignition[841]: INFO : files: op(c): op(d): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 14:25:10.809884 systemd-networkd[718]: eth0: Gained IPv6LL Dec 13 14:25:10.811828 ignition[841]: INFO : files: op(c): op(d): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 14:25:10.813981 ignition[841]: INFO : files: op(c): [finished] setting preset to disabled for "coreos-metadata.service" Dec 13 14:25:10.813981 ignition[841]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:25:10.813981 ignition[841]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:25:10.813981 ignition[841]: INFO : files: files passed Dec 13 14:25:10.813981 ignition[841]: INFO : Ignition finished successfully Dec 13 14:25:10.823205 systemd[1]: Finished ignition-files.service. Dec 13 14:25:10.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:10.824220 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 14:25:10.825448 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 14:25:10.825976 systemd[1]: Starting ignition-quench.service... 
Dec 13 14:25:10.828459 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 14:25:10.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:10.832000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:10.828529 systemd[1]: Finished ignition-quench.service. Dec 13 14:25:10.836376 initrd-setup-root-after-ignition[866]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Dec 13 14:25:10.840609 initrd-setup-root-after-ignition[868]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 14:25:10.843249 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 14:25:10.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:10.843883 systemd[1]: Reached target ignition-complete.target. Dec 13 14:25:10.846977 systemd[1]: Starting initrd-parse-etc.service... Dec 13 14:25:10.863099 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 14:25:10.863200 systemd[1]: Finished initrd-parse-etc.service. Dec 13 14:25:10.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:10.864000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:10.865102 systemd[1]: Reached target initrd-fs.target. 
Dec 13 14:25:10.876509 systemd[1]: Reached target initrd.target. Dec 13 14:25:10.877009 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 14:25:10.877749 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 14:25:10.890232 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 14:25:10.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:10.892188 systemd[1]: Starting initrd-cleanup.service... Dec 13 14:25:10.902021 systemd[1]: Stopped target nss-lookup.target. Dec 13 14:25:10.902589 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 14:25:10.903108 systemd[1]: Stopped target timers.target. Dec 13 14:25:10.905660 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 14:25:10.905844 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 14:25:10.908000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:10.909373 systemd[1]: Stopped target initrd.target. Dec 13 14:25:10.910503 systemd[1]: Stopped target basic.target. Dec 13 14:25:10.912231 systemd[1]: Stopped target ignition-complete.target. Dec 13 14:25:10.913819 systemd[1]: Stopped target ignition-diskful.target. Dec 13 14:25:10.915650 systemd[1]: Stopped target initrd-root-device.target. Dec 13 14:25:10.917343 systemd[1]: Stopped target remote-fs.target. Dec 13 14:25:10.919096 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 14:25:10.920645 systemd[1]: Stopped target sysinit.target. Dec 13 14:25:10.923623 systemd[1]: Stopped target local-fs.target. Dec 13 14:25:10.923990 systemd[1]: Stopped target local-fs-pre.target. Dec 13 14:25:10.925731 systemd[1]: Stopped target swap.target. 
Dec 13 14:25:10.927592 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 14:25:10.927000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:10.927780 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 14:25:10.928295 systemd[1]: Stopped target cryptsetup.target. Dec 13 14:25:10.931000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:10.930368 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 14:25:10.933000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:10.930500 systemd[1]: Stopped dracut-initqueue.service. Dec 13 14:25:10.932417 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 14:25:10.932613 systemd[1]: Stopped ignition-fetch-offline.service. Dec 13 14:25:10.934294 systemd[1]: Stopped target paths.target. Dec 13 14:25:10.936327 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 14:25:10.941657 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 14:25:10.942236 systemd[1]: Stopped target slices.target. Dec 13 14:25:10.944119 systemd[1]: Stopped target sockets.target. Dec 13 14:25:10.945383 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 14:25:10.945480 systemd[1]: Closed iscsid.socket. Dec 13 14:25:10.947000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:25:10.946893 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 14:25:10.949000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:10.947008 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 14:25:10.948356 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 14:25:10.948486 systemd[1]: Stopped ignition-files.service. Dec 13 14:25:10.951364 systemd[1]: Stopping ignition-mount.service... Dec 13 14:25:10.954075 systemd[1]: Stopping iscsiuio.service... Dec 13 14:25:10.955586 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 14:25:10.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:10.956508 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 14:25:10.960598 ignition[881]: INFO : Ignition 2.14.0 Dec 13 14:25:10.960598 ignition[881]: INFO : Stage: umount Dec 13 14:25:10.960598 ignition[881]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 14:25:10.960598 ignition[881]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 14:25:10.960598 ignition[881]: INFO : umount: umount passed Dec 13 14:25:10.960598 ignition[881]: INFO : Ignition finished successfully Dec 13 14:25:10.966734 systemd[1]: Stopping sysroot-boot.service... Dec 13 14:25:10.968280 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 14:25:10.969524 systemd[1]: Stopped systemd-udev-trigger.service. Dec 13 14:25:10.970000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:25:10.971679 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 14:25:10.973202 systemd[1]: Stopped dracut-pre-trigger.service. Dec 13 14:25:10.974000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:10.977588 systemd[1]: iscsiuio.service: Deactivated successfully. Dec 13 14:25:10.978590 systemd[1]: Stopped iscsiuio.service. Dec 13 14:25:10.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:10.980433 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 14:25:10.981453 systemd[1]: Stopped ignition-mount.service. Dec 13 14:25:10.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:10.984543 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 14:25:10.986041 systemd[1]: Stopped target network.target. Dec 13 14:25:10.987638 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 14:25:10.987671 systemd[1]: Closed iscsiuio.socket. Dec 13 14:25:10.989246 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 14:25:10.989296 systemd[1]: Stopped ignition-disks.service. Dec 13 14:25:10.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:10.992917 systemd[1]: ignition-kargs.service: Deactivated successfully. 
Dec 13 14:25:10.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:10.992959 systemd[1]: Stopped ignition-kargs.service. Dec 13 14:25:10.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:10.994762 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 14:25:10.994798 systemd[1]: Stopped ignition-setup.service. Dec 13 14:25:10.997548 systemd[1]: Stopping systemd-networkd.service... Dec 13 14:25:11.000198 systemd[1]: Stopping systemd-resolved.service... Dec 13 14:25:11.002043 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 14:25:11.003066 systemd[1]: Finished initrd-cleanup.service. Dec 13 14:25:11.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:11.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:11.005770 systemd-networkd[718]: eth0: DHCPv6 lease lost Dec 13 14:25:11.007106 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 14:25:11.008481 systemd[1]: Stopped systemd-networkd.service. Dec 13 14:25:11.010000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:11.010868 systemd[1]: systemd-resolved.service: Deactivated successfully. 
Dec 13 14:25:11.011000 audit: BPF prog-id=9 op=UNLOAD Dec 13 14:25:11.011929 systemd[1]: Stopped systemd-resolved.service. Dec 13 14:25:11.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:11.014324 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 14:25:11.014369 systemd[1]: Closed systemd-networkd.socket. Dec 13 14:25:11.015000 audit: BPF prog-id=6 op=UNLOAD Dec 13 14:25:11.017783 systemd[1]: Stopping network-cleanup.service... Dec 13 14:25:11.019503 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 14:25:11.021000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:11.019566 systemd[1]: Stopped parse-ip-for-networkd.service. Dec 13 14:25:11.021798 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:25:11.021838 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:25:11.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:11.025232 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 14:25:11.025274 systemd[1]: Stopped systemd-modules-load.service. Dec 13 14:25:11.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:11.028639 systemd[1]: Stopping systemd-udevd.service... Dec 13 14:25:11.031401 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Dec 13 14:25:11.036150 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 14:25:11.036281 systemd[1]: Stopped network-cleanup.service. Dec 13 14:25:11.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:11.040243 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 14:25:11.040402 systemd[1]: Stopped systemd-udevd.service. Dec 13 14:25:11.042000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:11.043835 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 14:25:11.043896 systemd[1]: Closed systemd-udevd-control.socket. Dec 13 14:25:11.045958 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 14:25:11.045992 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 14:25:11.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:11.046566 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 14:25:11.046617 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 14:25:11.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:11.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:11.049387 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
Dec 13 14:25:11.049425 systemd[1]: Stopped dracut-cmdline.service. Dec 13 14:25:11.055000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:11.050992 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 14:25:11.051026 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 14:25:11.052419 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 14:25:11.053954 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 14:25:11.054002 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 14:25:11.062685 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 14:25:11.064026 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 14:25:11.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:11.065000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:11.073200 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 14:25:11.074224 systemd[1]: Stopped sysroot-boot.service. Dec 13 14:25:11.075000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:11.075956 systemd[1]: Reached target initrd-switch-root.target. Dec 13 14:25:11.077774 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 14:25:11.077816 systemd[1]: Stopped initrd-setup-root.service. 
Dec 13 14:25:11.079000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:11.081124 systemd[1]: Starting initrd-switch-root.service... Dec 13 14:25:11.086550 systemd[1]: Switching root. Dec 13 14:25:11.090000 audit: BPF prog-id=8 op=UNLOAD Dec 13 14:25:11.090000 audit: BPF prog-id=7 op=UNLOAD Dec 13 14:25:11.091000 audit: BPF prog-id=5 op=UNLOAD Dec 13 14:25:11.091000 audit: BPF prog-id=4 op=UNLOAD Dec 13 14:25:11.091000 audit: BPF prog-id=3 op=UNLOAD Dec 13 14:25:11.107878 iscsid[723]: iscsid shutting down. Dec 13 14:25:11.108775 systemd-journald[197]: Received SIGTERM from PID 1 (systemd). Dec 13 14:25:11.108807 systemd-journald[197]: Journal stopped Dec 13 14:25:13.816934 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 14:25:13.816991 kernel: SELinux: Class anon_inode not defined in policy. Dec 13 14:25:13.817002 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 14:25:13.817012 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 14:25:13.817055 kernel: SELinux: policy capability open_perms=1 Dec 13 14:25:13.817068 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 14:25:13.817077 kernel: SELinux: policy capability always_check_network=0 Dec 13 14:25:13.817090 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 14:25:13.817099 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 14:25:13.817109 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 14:25:13.817119 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 14:25:13.817129 kernel: kauditd_printk_skb: 69 callbacks suppressed Dec 13 14:25:13.817142 kernel: audit: type=1403 audit(1734099911.202:80): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 14:25:13.817166 systemd[1]: Successfully loaded SELinux policy in 43.911ms. 
Dec 13 14:25:13.817186 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.207ms. Dec 13 14:25:13.817197 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 14:25:13.817208 systemd[1]: Detected virtualization kvm. Dec 13 14:25:13.817219 systemd[1]: Detected architecture x86-64. Dec 13 14:25:13.817228 systemd[1]: Detected first boot. Dec 13 14:25:13.817239 systemd[1]: Initializing machine ID from VM UUID. Dec 13 14:25:13.817249 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Dec 13 14:25:13.817267 kernel: audit: type=1400 audit(1734099911.376:81): avc: denied { associate } for pid=932 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Dec 13 14:25:13.817278 kernel: audit: type=1300 audit(1734099911.376:81): arch=c000003e syscall=188 success=yes exit=0 a0=c0001876c2 a1=c00002cb40 a2=c00002aa40 a3=32 items=0 ppid=915 pid=932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:25:13.817298 kernel: audit: type=1327 audit(1734099911.376:81): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 14:25:13.817308 kernel: audit: type=1400 
audit(1734099911.378:82): avc: denied { associate } for pid=932 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Dec 13 14:25:13.817319 kernel: audit: type=1300 audit(1734099911.378:82): arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c000187799 a2=1ed a3=0 items=2 ppid=915 pid=932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:25:13.817329 kernel: audit: type=1307 audit(1734099911.378:82): cwd="/" Dec 13 14:25:13.817347 kernel: audit: type=1302 audit(1734099911.378:82): item=0 name=(null) inode=2 dev=00:2a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:13.817357 kernel: audit: type=1302 audit(1734099911.378:82): item=1 name=(null) inode=3 dev=00:2a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:13.817368 kernel: audit: type=1327 audit(1734099911.378:82): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 14:25:13.817378 systemd[1]: Populated /etc with preset unit settings. Dec 13 14:25:13.817389 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:25:13.817400 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Dec 13 14:25:13.817418 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:25:13.817429 systemd[1]: Queued start job for default target multi-user.target. Dec 13 14:25:13.817439 systemd[1]: Unnecessary job was removed for dev-vda6.device. Dec 13 14:25:13.817449 systemd[1]: Created slice system-addon\x2dconfig.slice. Dec 13 14:25:13.817460 systemd[1]: Created slice system-addon\x2drun.slice. Dec 13 14:25:13.817472 systemd[1]: Created slice system-getty.slice. Dec 13 14:25:13.817482 systemd[1]: Created slice system-modprobe.slice. Dec 13 14:25:13.817492 systemd[1]: Created slice system-serial\x2dgetty.slice. Dec 13 14:25:13.817510 systemd[1]: Created slice system-system\x2dcloudinit.slice. Dec 13 14:25:13.817521 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 14:25:13.817531 systemd[1]: Created slice user.slice. Dec 13 14:25:13.817541 systemd[1]: Started systemd-ask-password-console.path. Dec 13 14:25:13.817552 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 14:25:13.817562 systemd[1]: Set up automount boot.automount. Dec 13 14:25:13.817573 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 14:25:13.817590 systemd[1]: Reached target integritysetup.target. Dec 13 14:25:13.817600 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 14:25:13.817611 systemd[1]: Reached target remote-fs.target. Dec 13 14:25:13.817621 systemd[1]: Reached target slices.target. Dec 13 14:25:13.817631 systemd[1]: Reached target swap.target. Dec 13 14:25:13.817642 systemd[1]: Reached target torcx.target. Dec 13 14:25:13.817652 systemd[1]: Reached target veritysetup.target. Dec 13 14:25:13.817669 systemd[1]: Listening on systemd-coredump.socket. Dec 13 14:25:13.817680 systemd[1]: Listening on systemd-initctl.socket. 
Dec 13 14:25:13.817690 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 14:25:13.817701 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 14:25:13.817730 systemd[1]: Listening on systemd-journald.socket. Dec 13 14:25:13.817741 systemd[1]: Listening on systemd-networkd.socket. Dec 13 14:25:13.817751 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 14:25:13.817762 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 14:25:13.818805 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 14:25:13.818845 systemd[1]: Mounting dev-hugepages.mount... Dec 13 14:25:13.818980 systemd[1]: Mounting dev-mqueue.mount... Dec 13 14:25:13.819002 systemd[1]: Mounting media.mount... Dec 13 14:25:13.819016 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:25:13.819127 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 14:25:13.819149 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 14:25:13.819163 systemd[1]: Mounting tmp.mount... Dec 13 14:25:13.819256 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 14:25:13.819293 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:25:13.819322 systemd[1]: Starting kmod-static-nodes.service... Dec 13 14:25:13.819882 systemd[1]: Starting modprobe@configfs.service... Dec 13 14:25:13.819911 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:25:13.819927 systemd[1]: Starting modprobe@drm.service... Dec 13 14:25:13.819939 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:25:13.819951 systemd[1]: Starting modprobe@fuse.service... Dec 13 14:25:13.819962 systemd[1]: Starting modprobe@loop.service... Dec 13 14:25:13.819972 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Dec 13 14:25:13.819983 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Dec 13 14:25:13.820008 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Dec 13 14:25:13.820018 systemd[1]: Starting systemd-journald.service...
Dec 13 14:25:13.820029 systemd[1]: Starting systemd-modules-load.service...
Dec 13 14:25:13.820041 kernel: fuse: init (API version 7.34)
Dec 13 14:25:13.820055 systemd[1]: Starting systemd-network-generator.service...
Dec 13 14:25:13.820068 systemd[1]: Starting systemd-remount-fs.service...
Dec 13 14:25:13.820079 kernel: loop: module loaded
Dec 13 14:25:13.820089 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 14:25:13.820100 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:25:13.820120 systemd[1]: Mounted dev-hugepages.mount.
Dec 13 14:25:13.820130 systemd[1]: Mounted dev-mqueue.mount.
Dec 13 14:25:13.820141 systemd[1]: Mounted media.mount.
Dec 13 14:25:13.820151 systemd[1]: Mounted sys-kernel-debug.mount.
Dec 13 14:25:13.820161 systemd[1]: Mounted sys-kernel-tracing.mount.
Dec 13 14:25:13.820171 systemd[1]: Mounted tmp.mount.
Dec 13 14:25:13.820183 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 14:25:13.820207 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 14:25:13.820222 systemd-journald[1024]: Journal started
Dec 13 14:25:13.820277 systemd-journald[1024]: Runtime Journal (/run/log/journal/9c361cd09f8c40629c29d52ff21d553d) is 6.0M, max 48.4M, 42.4M free.
Dec 13 14:25:13.704000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 14:25:13.704000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Dec 13 14:25:13.807000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Dec 13 14:25:13.807000 audit[1024]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffe0461d720 a2=4000 a3=7ffe0461d7bc items=0 ppid=1 pid=1024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:25:13.807000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Dec 13 14:25:13.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:13.821206 systemd[1]: Finished modprobe@configfs.service.
Dec 13 14:25:13.822570 systemd[1]: Started systemd-journald.service.
Dec 13 14:25:13.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:13.822000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:13.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:13.825056 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:25:13.825259 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:25:13.826548 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 14:25:13.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:13.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:13.826809 systemd[1]: Finished modprobe@drm.service.
Dec 13 14:25:13.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:13.828000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:13.829371 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:25:13.829556 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:25:13.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:13.830000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:13.830960 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 14:25:13.831136 systemd[1]: Finished modprobe@fuse.service.
Dec 13 14:25:13.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:13.831000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:13.832460 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:25:13.832669 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:25:13.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:13.833000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:13.834100 systemd[1]: Finished systemd-modules-load.service.
Dec 13 14:25:13.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:13.835670 systemd[1]: Finished flatcar-tmpfiles.service.
Dec 13 14:25:13.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:13.837317 systemd[1]: Finished systemd-network-generator.service.
Dec 13 14:25:13.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:13.838920 systemd[1]: Finished systemd-remount-fs.service.
Dec 13 14:25:13.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:13.840620 systemd[1]: Reached target network-pre.target.
Dec 13 14:25:13.842969 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Dec 13 14:25:13.844973 systemd[1]: Mounting sys-kernel-config.mount...
Dec 13 14:25:13.845851 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 14:25:13.847864 systemd[1]: Starting systemd-hwdb-update.service...
Dec 13 14:25:13.851991 systemd[1]: Starting systemd-journal-flush.service...
Dec 13 14:25:13.852987 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:25:13.854128 systemd[1]: Starting systemd-random-seed.service...
Dec 13 14:25:13.855134 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:25:13.856314 systemd[1]: Starting systemd-sysctl.service...
Dec 13 14:25:13.861774 systemd-journald[1024]: Time spent on flushing to /var/log/journal/9c361cd09f8c40629c29d52ff21d553d is 17.465ms for 1080 entries.
Dec 13 14:25:13.861774 systemd-journald[1024]: System Journal (/var/log/journal/9c361cd09f8c40629c29d52ff21d553d) is 8.0M, max 195.6M, 187.6M free.
Dec 13 14:25:13.896016 systemd-journald[1024]: Received client request to flush runtime journal.
Dec 13 14:25:13.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:13.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:13.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:13.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:13.858164 systemd[1]: Starting systemd-sysusers.service...
Dec 13 14:25:13.863623 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 14:25:13.867634 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Dec 13 14:25:13.869245 systemd[1]: Mounted sys-kernel-config.mount.
Dec 13 14:25:13.898239 udevadm[1067]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Dec 13 14:25:13.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:13.872555 systemd[1]: Starting systemd-udev-settle.service...
Dec 13 14:25:13.874924 systemd[1]: Finished systemd-random-seed.service.
Dec 13 14:25:13.876515 systemd[1]: Reached target first-boot-complete.target.
Dec 13 14:25:13.888851 systemd[1]: Finished systemd-sysctl.service.
Dec 13 14:25:13.890322 systemd[1]: Finished systemd-sysusers.service.
Dec 13 14:25:13.893490 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 14:25:13.897102 systemd[1]: Finished systemd-journal-flush.service.
Dec 13 14:25:13.922479 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 14:25:13.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:14.423781 systemd[1]: Finished systemd-hwdb-update.service.
Dec 13 14:25:14.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:14.426293 systemd[1]: Starting systemd-udevd.service...
Dec 13 14:25:14.447483 systemd-udevd[1077]: Using default interface naming scheme 'v252'.
Dec 13 14:25:14.463447 systemd[1]: Started systemd-udevd.service.
Dec 13 14:25:14.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:25:14.467027 systemd[1]: Starting systemd-networkd.service...
Dec 13 14:25:14.477971 systemd[1]: Starting systemd-userdbd.service...
Dec 13 14:25:14.490202 systemd[1]: Found device dev-ttyS0.device.
Dec 13 14:25:14.530999 systemd[1]: Started systemd-userdbd.service.
Dec 13 14:25:14.532739 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Dec 13 14:25:14.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:14.538748 kernel: ACPI: button: Power Button [PWRF] Dec 13 14:25:14.539940 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 14:25:14.573000 audit[1092]: AVC avc: denied { confidentiality } for pid=1092 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 14:25:14.602561 systemd-networkd[1087]: lo: Link UP Dec 13 14:25:14.602576 systemd-networkd[1087]: lo: Gained carrier Dec 13 14:25:14.602994 systemd-networkd[1087]: Enumeration completed Dec 13 14:25:14.603151 systemd-networkd[1087]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:25:14.603159 systemd[1]: Started systemd-networkd.service. Dec 13 14:25:14.604434 systemd-networkd[1087]: eth0: Link UP Dec 13 14:25:14.604443 systemd-networkd[1087]: eth0: Gained carrier Dec 13 14:25:14.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:25:14.573000 audit[1092]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=558b8e00fce0 a1=337fc a2=7ff5ac684bc5 a3=5 items=110 ppid=1077 pid=1092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:25:14.573000 audit: CWD cwd="/" Dec 13 14:25:14.573000 audit: PATH item=0 name=(null) inode=51 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=1 name=(null) inode=15090 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=2 name=(null) inode=15090 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=3 name=(null) inode=15091 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=4 name=(null) inode=15090 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=5 name=(null) inode=15092 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=6 name=(null) inode=15090 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=7 name=(null) inode=15093 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=8 name=(null) inode=15093 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=9 name=(null) inode=15094 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=10 name=(null) inode=15093 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=11 name=(null) inode=15095 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=12 name=(null) inode=15093 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=13 name=(null) inode=15096 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=14 name=(null) inode=15093 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=15 name=(null) inode=15097 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=16 name=(null) inode=15093 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=17 name=(null) inode=15098 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=18 name=(null) inode=15090 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=19 name=(null) inode=15099 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=20 name=(null) inode=15099 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=21 name=(null) inode=15100 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=22 name=(null) inode=15099 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=23 name=(null) inode=15101 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=24 name=(null) inode=15099 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=25 name=(null) inode=15102 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=26 name=(null) inode=15099 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=27 name=(null) inode=15103 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=28 name=(null) inode=15099 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=29 name=(null) inode=15104 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=30 name=(null) inode=15090 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=31 name=(null) inode=15105 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=32 name=(null) inode=15105 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=33 name=(null) inode=15106 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=34 name=(null) inode=15105 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=35 name=(null) inode=15107 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=36 name=(null) inode=15105 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=37 name=(null) inode=15108 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=38 name=(null) inode=15105 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=39 name=(null) inode=15109 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=40 name=(null) inode=15105 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=41 name=(null) inode=15110 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=42 name=(null) inode=15090 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=43 name=(null) inode=15111 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 
14:25:14.573000 audit: PATH item=44 name=(null) inode=15111 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=45 name=(null) inode=15112 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=46 name=(null) inode=15111 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=47 name=(null) inode=15113 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=48 name=(null) inode=15111 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=49 name=(null) inode=15114 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=50 name=(null) inode=15111 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=51 name=(null) inode=15115 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=52 name=(null) inode=15111 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=53 
name=(null) inode=15116 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=54 name=(null) inode=51 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=55 name=(null) inode=15117 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=56 name=(null) inode=15117 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=57 name=(null) inode=15118 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=58 name=(null) inode=15117 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=59 name=(null) inode=15119 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=60 name=(null) inode=15117 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=61 name=(null) inode=15120 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=62 name=(null) inode=15120 dev=00:0b mode=040750 
ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=63 name=(null) inode=15121 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=64 name=(null) inode=15120 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=65 name=(null) inode=15122 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=66 name=(null) inode=15120 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=67 name=(null) inode=15123 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=68 name=(null) inode=15120 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=69 name=(null) inode=15124 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=70 name=(null) inode=15120 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=71 name=(null) inode=15125 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=72 name=(null) inode=15117 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=73 name=(null) inode=15126 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=74 name=(null) inode=15126 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=75 name=(null) inode=15127 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=76 name=(null) inode=15126 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=77 name=(null) inode=15128 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=78 name=(null) inode=15126 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=79 name=(null) inode=15129 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=80 name=(null) inode=15126 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=81 name=(null) inode=15130 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.616860 systemd-networkd[1087]: eth0: DHCPv4 address 10.0.0.92/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 14:25:14.573000 audit: PATH item=82 name=(null) inode=15126 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=83 name=(null) inode=15131 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=84 name=(null) inode=15117 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=85 name=(null) inode=15132 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=86 name=(null) inode=15132 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=87 name=(null) inode=15133 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=88 name=(null) inode=15132 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=89 name=(null) inode=15134 
dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=90 name=(null) inode=15132 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=91 name=(null) inode=15135 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=92 name=(null) inode=15132 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=93 name=(null) inode=15136 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=94 name=(null) inode=15132 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=95 name=(null) inode=15137 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=96 name=(null) inode=15117 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=97 name=(null) inode=15138 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=98 name=(null) inode=15138 dev=00:0b mode=040750 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=99 name=(null) inode=15139 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=100 name=(null) inode=15138 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=101 name=(null) inode=15140 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=102 name=(null) inode=15138 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=103 name=(null) inode=15141 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=104 name=(null) inode=15138 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=105 name=(null) inode=15142 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=106 name=(null) inode=15138 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=107 name=(null) inode=15143 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PATH item=109 name=(null) inode=15150 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:25:14.573000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 14:25:14.637880 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Dec 13 14:25:14.640248 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 13 14:25:14.640377 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Dec 13 14:25:14.640490 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 13 14:25:14.657739 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Dec 13 14:25:14.664753 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 14:25:14.664830 kernel: kvm: Nested Virtualization enabled Dec 13 14:25:14.664847 kernel: SVM: kvm: Nested Paging enabled Dec 13 14:25:14.666109 kernel: SVM: Virtual VMLOAD VMSAVE supported Dec 13 14:25:14.666153 kernel: SVM: Virtual GIF supported Dec 13 14:25:14.696754 kernel: EDAC MC: Ver: 3.0.0 Dec 13 14:25:14.725282 systemd[1]: Finished systemd-udev-settle.service. Dec 13 14:25:14.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:14.727960 systemd[1]: Starting lvm2-activation-early.service... Dec 13 14:25:14.735275 lvm[1114]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Dec 13 14:25:14.764886 systemd[1]: Finished lvm2-activation-early.service. Dec 13 14:25:14.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:14.766035 systemd[1]: Reached target cryptsetup.target. Dec 13 14:25:14.768244 systemd[1]: Starting lvm2-activation.service... Dec 13 14:25:14.771367 lvm[1116]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:25:14.799910 systemd[1]: Finished lvm2-activation.service. Dec 13 14:25:14.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:14.801014 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:25:14.801992 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 14:25:14.802018 systemd[1]: Reached target local-fs.target. Dec 13 14:25:14.802886 systemd[1]: Reached target machines.target. Dec 13 14:25:14.805209 systemd[1]: Starting ldconfig.service... Dec 13 14:25:14.806398 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:25:14.806444 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:25:14.807579 systemd[1]: Starting systemd-boot-update.service... Dec 13 14:25:14.809634 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 14:25:14.812425 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 14:25:14.814508 systemd[1]: Starting systemd-sysext.service... 
Dec 13 14:25:14.815910 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1119 (bootctl) Dec 13 14:25:14.817173 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 14:25:14.818888 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 14:25:14.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:14.828627 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 14:25:14.832367 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 14:25:14.832615 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 14:25:14.846747 kernel: loop0: detected capacity change from 0 to 211296 Dec 13 14:25:14.854569 systemd-fsck[1127]: fsck.fat 4.2 (2021-01-31) Dec 13 14:25:14.854569 systemd-fsck[1127]: /dev/vda1: 790 files, 119311/258078 clusters Dec 13 14:25:14.857110 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 14:25:14.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:14.860129 systemd[1]: Mounting boot.mount... Dec 13 14:25:14.892772 systemd[1]: Mounted boot.mount. Dec 13 14:25:15.048641 systemd[1]: Finished systemd-boot-update.service. Dec 13 14:25:15.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:15.054220 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 14:25:15.055161 systemd[1]: Finished systemd-machine-id-commit.service. 
Dec 13 14:25:15.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:15.059731 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 14:25:15.074739 kernel: loop1: detected capacity change from 0 to 211296 Dec 13 14:25:15.079964 (sd-sysext)[1140]: Using extensions 'kubernetes'. Dec 13 14:25:15.080324 (sd-sysext)[1140]: Merged extensions into '/usr'. Dec 13 14:25:15.098405 ldconfig[1118]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 14:25:15.099432 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:25:15.101417 systemd[1]: Mounting usr-share-oem.mount... Dec 13 14:25:15.102603 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:25:15.104229 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:25:15.107107 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:25:15.110243 systemd[1]: Starting modprobe@loop.service... Dec 13 14:25:15.112106 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:25:15.112495 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:25:15.112885 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:25:15.117331 systemd[1]: Finished ldconfig.service. Dec 13 14:25:15.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:25:15.118466 systemd[1]: Mounted usr-share-oem.mount. Dec 13 14:25:15.119660 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:25:15.119866 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:25:15.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:15.120000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:15.121206 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:25:15.121393 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:25:15.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:15.121000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:15.122747 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:25:15.122940 systemd[1]: Finished modprobe@loop.service. Dec 13 14:25:15.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:15.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:25:15.124372 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:25:15.124484 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:25:15.125563 systemd[1]: Finished systemd-sysext.service. Dec 13 14:25:15.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:15.129332 systemd[1]: Starting ensure-sysext.service... Dec 13 14:25:15.131783 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 14:25:15.137341 systemd[1]: Reloading. Dec 13 14:25:15.145573 systemd-tmpfiles[1155]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 14:25:15.146461 systemd-tmpfiles[1155]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 14:25:15.148685 systemd-tmpfiles[1155]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 14:25:15.281371 /usr/lib/systemd/system-generators/torcx-generator[1174]: time="2024-12-13T14:25:15Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:25:15.281408 /usr/lib/systemd/system-generators/torcx-generator[1174]: time="2024-12-13T14:25:15Z" level=info msg="torcx already run" Dec 13 14:25:15.372700 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Dec 13 14:25:15.372746 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:25:15.398019 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:25:15.476550 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 14:25:15.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:15.481229 systemd[1]: Starting audit-rules.service... Dec 13 14:25:15.483633 systemd[1]: Starting clean-ca-certificates.service... Dec 13 14:25:15.486472 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 14:25:15.489222 systemd[1]: Starting systemd-resolved.service... Dec 13 14:25:15.492070 systemd[1]: Starting systemd-timesyncd.service... Dec 13 14:25:15.494423 systemd[1]: Starting systemd-update-utmp.service... Dec 13 14:25:15.496368 systemd[1]: Finished clean-ca-certificates.service. Dec 13 14:25:15.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:15.498000 audit[1235]: SYSTEM_BOOT pid=1235 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 14:25:15.502393 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Dec 13 14:25:15.504056 systemd[1]: Finished systemd-update-utmp.service. Dec 13 14:25:15.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:15.508390 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 14:25:15.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:15.511236 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:25:15.512815 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:25:15.515119 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:25:15.517164 systemd[1]: Starting modprobe@loop.service... Dec 13 14:25:15.518453 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:25:15.518635 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:25:15.520859 systemd[1]: Starting systemd-update-done.service... Dec 13 14:25:15.521992 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:25:15.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:25:15.524000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:15.523547 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:25:15.523883 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:25:15.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:15.526000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:15.525437 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:25:15.525623 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:25:15.527164 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:25:15.527585 systemd[1]: Finished modprobe@loop.service. Dec 13 14:25:15.528000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:15.528000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:25:15.530239 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:25:15.530431 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. 
Dec 13 14:25:15.533534 augenrules[1255]: No rules Dec 13 14:25:15.532000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 14:25:15.532000 audit[1255]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd786ea830 a2=420 a3=0 items=0 ppid=1224 pid=1255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:25:15.532000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 14:25:15.538075 systemd[1]: Finished audit-rules.service. Dec 13 14:25:15.539745 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:25:15.543469 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:25:15.546184 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:25:15.549695 systemd[1]: Starting modprobe@loop.service... Dec 13 14:25:15.552075 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:25:15.552412 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:25:15.552636 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:25:15.554356 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:25:15.554605 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:25:15.556535 systemd[1]: Finished systemd-update-done.service. Dec 13 14:25:15.558125 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:25:15.558363 systemd[1]: Finished modprobe@efi_pstore.service. 
Dec 13 14:25:15.560124 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:25:15.564617 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:25:15.567220 systemd[1]: Finished modprobe@loop.service. Dec 13 14:25:15.569005 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:25:15.571063 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:25:15.573766 systemd[1]: Starting modprobe@drm.service... Dec 13 14:25:15.576313 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:25:15.578938 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:25:15.579085 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:25:15.581023 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 14:25:15.583986 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:25:15.586436 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:25:15.586640 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:25:15.588113 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 14:25:15.588271 systemd[1]: Finished modprobe@drm.service. Dec 13 14:25:15.589705 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:25:15.589911 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:25:15.591859 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:25:15.592018 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. 
Dec 13 14:25:15.594171 systemd[1]: Finished ensure-sysext.service. Dec 13 14:25:15.608647 systemd-resolved[1230]: Positive Trust Anchors: Dec 13 14:25:15.608664 systemd-resolved[1230]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 14:25:15.608696 systemd-resolved[1230]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 14:25:15.608967 systemd[1]: Started systemd-timesyncd.service. Dec 13 14:25:15.610353 systemd[1]: Reached target time-set.target. Dec 13 14:25:15.611010 systemd-timesyncd[1234]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 13 14:25:15.611065 systemd-timesyncd[1234]: Initial clock synchronization to Fri 2024-12-13 14:25:15.668728 UTC. Dec 13 14:25:15.618350 systemd-resolved[1230]: Defaulting to hostname 'linux'. Dec 13 14:25:15.620041 systemd[1]: Started systemd-resolved.service. Dec 13 14:25:15.621057 systemd[1]: Reached target network.target. Dec 13 14:25:15.621891 systemd[1]: Reached target nss-lookup.target. Dec 13 14:25:15.622781 systemd[1]: Reached target sysinit.target. Dec 13 14:25:15.623647 systemd[1]: Started motdgen.path. Dec 13 14:25:15.624394 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 14:25:15.625736 systemd[1]: Started logrotate.timer. Dec 13 14:25:15.626661 systemd[1]: Started mdadm.timer. Dec 13 14:25:15.627474 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 14:25:15.628425 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
Dec 13 14:25:15.628460 systemd[1]: Reached target paths.target. Dec 13 14:25:15.629292 systemd[1]: Reached target timers.target. Dec 13 14:25:15.630517 systemd[1]: Listening on dbus.socket. Dec 13 14:25:15.632867 systemd[1]: Starting docker.socket... Dec 13 14:25:15.634951 systemd[1]: Listening on sshd.socket. Dec 13 14:25:15.635831 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:25:15.636148 systemd[1]: Listening on docker.socket. Dec 13 14:25:15.636938 systemd[1]: Reached target sockets.target. Dec 13 14:25:15.637886 systemd[1]: Reached target basic.target. Dec 13 14:25:15.638885 systemd[1]: System is tainted: cgroupsv1 Dec 13 14:25:15.638956 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:25:15.638988 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:25:15.640044 systemd[1]: Starting containerd.service... Dec 13 14:25:15.641831 systemd[1]: Starting dbus.service... Dec 13 14:25:15.643574 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 14:25:15.645839 systemd[1]: Starting extend-filesystems.service... Dec 13 14:25:15.647816 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 14:25:15.648976 systemd[1]: Starting motdgen.service... Dec 13 14:25:15.650765 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 14:25:15.652693 jq[1285]: false Dec 13 14:25:15.653740 systemd[1]: Starting sshd-keygen.service... Dec 13 14:25:15.656517 systemd[1]: Starting systemd-logind.service... 
Dec 13 14:25:15.657314 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:25:15.657381 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 14:25:15.658457 systemd[1]: Starting update-engine.service... Dec 13 14:25:15.660515 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 14:25:15.663431 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 14:25:15.663734 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 14:25:15.697103 jq[1302]: true Dec 13 14:25:15.700539 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 14:25:15.718890 extend-filesystems[1286]: Found loop1 Dec 13 14:25:15.718890 extend-filesystems[1286]: Found sr0 Dec 13 14:25:15.718890 extend-filesystems[1286]: Found vda Dec 13 14:25:15.718890 extend-filesystems[1286]: Found vda1 Dec 13 14:25:15.718890 extend-filesystems[1286]: Found vda2 Dec 13 14:25:15.718890 extend-filesystems[1286]: Found vda3 Dec 13 14:25:15.718890 extend-filesystems[1286]: Found usr Dec 13 14:25:15.718890 extend-filesystems[1286]: Found vda4 Dec 13 14:25:15.718890 extend-filesystems[1286]: Found vda6 Dec 13 14:25:15.718890 extend-filesystems[1286]: Found vda7 Dec 13 14:25:15.718890 extend-filesystems[1286]: Found vda9 Dec 13 14:25:15.718890 extend-filesystems[1286]: Checking size of /dev/vda9 Dec 13 14:25:15.762954 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 13 14:25:15.701497 dbus-daemon[1284]: [system] SELinux support is enabled Dec 13 14:25:15.700901 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Dec 13 14:25:15.763515 update_engine[1298]: I1213 14:25:15.745851 1298 main.cc:92] Flatcar Update Engine starting Dec 13 14:25:15.763515 update_engine[1298]: I1213 14:25:15.749509 1298 update_check_scheduler.cc:74] Next update check in 11m32s Dec 13 14:25:15.767561 extend-filesystems[1286]: Resized partition /dev/vda9 Dec 13 14:25:15.774327 jq[1309]: true Dec 13 14:25:15.702370 systemd[1]: Started dbus.service. Dec 13 14:25:15.776634 extend-filesystems[1323]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 14:25:15.706928 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 14:25:15.706979 systemd[1]: Reached target system-config.target. Dec 13 14:25:15.720030 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 14:25:15.720050 systemd[1]: Reached target user-config.target. Dec 13 14:25:15.720955 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 14:25:15.721229 systemd[1]: Finished motdgen.service. Dec 13 14:25:15.749425 systemd[1]: Started update-engine.service. Dec 13 14:25:15.766557 systemd[1]: Started locksmithd.service. Dec 13 14:25:15.797277 systemd-logind[1295]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 14:25:15.797313 systemd-logind[1295]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 14:25:15.797759 systemd-logind[1295]: New seat seat0. Dec 13 14:25:15.800550 env[1315]: time="2024-12-13T14:25:15.800483847Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 14:25:15.808885 systemd[1]: Started systemd-logind.service. Dec 13 14:25:15.836739 env[1315]: time="2024-12-13T14:25:15.832278383Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Dec 13 14:25:15.836739 env[1315]: time="2024-12-13T14:25:15.832453792Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:25:15.836739 env[1315]: time="2024-12-13T14:25:15.834184969Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:25:15.836739 env[1315]: time="2024-12-13T14:25:15.834207421Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:25:15.836739 env[1315]: time="2024-12-13T14:25:15.834464853Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:25:15.836739 env[1315]: time="2024-12-13T14:25:15.834480032Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 14:25:15.836739 env[1315]: time="2024-12-13T14:25:15.834491253Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 14:25:15.836739 env[1315]: time="2024-12-13T14:25:15.834499318Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 14:25:15.836739 env[1315]: time="2024-12-13T14:25:15.834563498Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:25:15.836739 env[1315]: time="2024-12-13T14:25:15.834789081Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Dec 13 14:25:15.833320 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:25:15.837213 env[1315]: time="2024-12-13T14:25:15.834930567Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:25:15.837213 env[1315]: time="2024-12-13T14:25:15.834943511Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 14:25:15.837213 env[1315]: time="2024-12-13T14:25:15.834990189Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 14:25:15.837213 env[1315]: time="2024-12-13T14:25:15.835003073Z" level=info msg="metadata content store policy set" policy=shared Dec 13 14:25:15.833393 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:25:15.856739 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 13 14:25:15.969073 locksmithd[1335]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 14:25:16.293294 extend-filesystems[1323]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 14:25:16.293294 extend-filesystems[1323]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 14:25:16.293294 extend-filesystems[1323]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Dec 13 14:25:16.298862 extend-filesystems[1286]: Resized filesystem in /dev/vda9 Dec 13 14:25:16.294075 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 14:25:16.294366 systemd[1]: Finished extend-filesystems.service. 
Dec 13 14:25:16.312879 systemd-networkd[1087]: eth0: Gained IPv6LL Dec 13 14:25:16.314594 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 14:25:16.322060 systemd[1]: Reached target network-online.target. Dec 13 14:25:16.328218 systemd[1]: Starting kubelet.service... Dec 13 14:25:16.414472 bash[1340]: Updated "/home/core/.ssh/authorized_keys" Dec 13 14:25:16.414903 env[1315]: time="2024-12-13T14:25:16.414185085Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 14:25:16.414903 env[1315]: time="2024-12-13T14:25:16.414271416Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 14:25:16.414903 env[1315]: time="2024-12-13T14:25:16.414299984Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 14:25:16.414903 env[1315]: time="2024-12-13T14:25:16.414355828Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 14:25:16.414903 env[1315]: time="2024-12-13T14:25:16.414402187Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 14:25:16.414903 env[1315]: time="2024-12-13T14:25:16.414430866Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 14:25:16.414903 env[1315]: time="2024-12-13T14:25:16.414523735Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 14:25:16.414903 env[1315]: time="2024-12-13T14:25:16.414552605Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 14:25:16.414903 env[1315]: time="2024-12-13T14:25:16.414569680Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." 
type=io.containerd.service.v1 Dec 13 14:25:16.414903 env[1315]: time="2024-12-13T14:25:16.414586008Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 14:25:16.414903 env[1315]: time="2024-12-13T14:25:16.414606835Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 14:25:16.414903 env[1315]: time="2024-12-13T14:25:16.414620358Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 14:25:16.414903 env[1315]: time="2024-12-13T14:25:16.414806610Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 14:25:16.414903 env[1315]: time="2024-12-13T14:25:16.414900559Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 14:25:16.415297 env[1315]: time="2024-12-13T14:25:16.415261833Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 14:25:16.415297 env[1315]: time="2024-12-13T14:25:16.415294609Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 14:25:16.415395 env[1315]: time="2024-12-13T14:25:16.415315135Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 14:25:16.415395 env[1315]: time="2024-12-13T14:25:16.415383401Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 14:25:16.415443 env[1315]: time="2024-12-13T14:25:16.415396460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 14:25:16.415443 env[1315]: time="2024-12-13T14:25:16.415409276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Dec 13 14:25:16.415443 env[1315]: time="2024-12-13T14:25:16.415421001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 14:25:16.415443 env[1315]: time="2024-12-13T14:25:16.415438247Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 14:25:16.415527 env[1315]: time="2024-12-13T14:25:16.415454575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 14:25:16.415527 env[1315]: time="2024-12-13T14:25:16.415468137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 14:25:16.415527 env[1315]: time="2024-12-13T14:25:16.415478975Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 14:25:16.415527 env[1315]: time="2024-12-13T14:25:16.415491468Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 14:25:16.415650 env[1315]: time="2024-12-13T14:25:16.415610342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 14:25:16.415650 env[1315]: time="2024-12-13T14:25:16.415625368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 14:25:16.415650 env[1315]: time="2024-12-13T14:25:16.415637104Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 14:25:16.415732 env[1315]: time="2024-12-13T14:25:16.415648698Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 14:25:16.415732 env[1315]: time="2024-12-13T14:25:16.415666913Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 14:25:16.415732 env[1315]: time="2024-12-13T14:25:16.415680698Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 14:25:16.415732 env[1315]: time="2024-12-13T14:25:16.415701748Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 14:25:16.415825 env[1315]: time="2024-12-13T14:25:16.415758722Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 14:25:16.416044 env[1315]: time="2024-12-13T14:25:16.415979457Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 
SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 14:25:16.416044 env[1315]: time="2024-12-13T14:25:16.416042839Z" level=info msg="Connect containerd service" Dec 13 14:25:16.421375 env[1315]: time="2024-12-13T14:25:16.416083507Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 14:25:16.421375 env[1315]: time="2024-12-13T14:25:16.416750412Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:25:16.421375 env[1315]: time="2024-12-13T14:25:16.417017969Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 14:25:16.421375 env[1315]: time="2024-12-13T14:25:16.417054671Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 14:25:16.421375 env[1315]: time="2024-12-13T14:25:16.417096105Z" level=info msg="containerd successfully booted in 0.628287s" Dec 13 14:25:16.417307 systemd[1]: Started containerd.service. 
Dec 13 14:25:16.421728 env[1315]: time="2024-12-13T14:25:16.421686769Z" level=info msg="Start subscribing containerd event" Dec 13 14:25:16.424570 env[1315]: time="2024-12-13T14:25:16.424537988Z" level=info msg="Start recovering state" Dec 13 14:25:16.424789 env[1315]: time="2024-12-13T14:25:16.424769873Z" level=info msg="Start event monitor" Dec 13 14:25:16.424884 env[1315]: time="2024-12-13T14:25:16.424862429Z" level=info msg="Start snapshots syncer" Dec 13 14:25:16.424977 env[1315]: time="2024-12-13T14:25:16.424955601Z" level=info msg="Start cni network conf syncer for default" Dec 13 14:25:16.425061 env[1315]: time="2024-12-13T14:25:16.425040599Z" level=info msg="Start streaming server" Dec 13 14:25:16.432266 sshd_keygen[1305]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 14:25:16.473023 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 14:25:16.502628 systemd[1]: Finished sshd-keygen.service. Dec 13 14:25:16.528600 systemd[1]: Starting issuegen.service... Dec 13 14:25:16.534581 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 14:25:16.534820 systemd[1]: Finished issuegen.service. Dec 13 14:25:16.538525 systemd[1]: Starting systemd-user-sessions.service... Dec 13 14:25:16.545034 systemd[1]: Finished systemd-user-sessions.service. Dec 13 14:25:16.547687 systemd[1]: Started getty@tty1.service. Dec 13 14:25:16.550502 systemd[1]: Started serial-getty@ttyS0.service. Dec 13 14:25:16.554116 systemd[1]: Reached target getty.target. Dec 13 14:25:17.317977 systemd[1]: Started kubelet.service. Dec 13 14:25:17.319657 systemd[1]: Reached target multi-user.target. Dec 13 14:25:17.322684 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 14:25:17.334261 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 14:25:17.334519 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 14:25:17.335858 systemd[1]: Startup finished in 5.186s (kernel) + 6.178s (userspace) = 11.365s. 
Dec 13 14:25:18.078950 kubelet[1379]: E1213 14:25:18.078844 1379 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:25:18.080860 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:25:18.081042 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:25:20.018990 systemd[1]: Created slice system-sshd.slice. Dec 13 14:25:20.020265 systemd[1]: Started sshd@0-10.0.0.92:22-10.0.0.1:53212.service. Dec 13 14:25:20.060651 sshd[1390]: Accepted publickey for core from 10.0.0.1 port 53212 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg Dec 13 14:25:20.062559 sshd[1390]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:25:20.074690 systemd-logind[1295]: New session 1 of user core. Dec 13 14:25:20.075811 systemd[1]: Created slice user-500.slice. Dec 13 14:25:20.076880 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 14:25:20.088016 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 14:25:20.089334 systemd[1]: Starting user@500.service... Dec 13 14:25:20.093043 (systemd)[1394]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:25:20.187799 systemd[1394]: Queued start job for default target default.target. Dec 13 14:25:20.188045 systemd[1394]: Reached target paths.target. Dec 13 14:25:20.188065 systemd[1394]: Reached target sockets.target. Dec 13 14:25:20.188080 systemd[1394]: Reached target timers.target. Dec 13 14:25:20.188095 systemd[1394]: Reached target basic.target. Dec 13 14:25:20.188139 systemd[1394]: Reached target default.target. Dec 13 14:25:20.188165 systemd[1394]: Startup finished in 88ms. 
Dec 13 14:25:20.188417 systemd[1]: Started user@500.service. Dec 13 14:25:20.189870 systemd[1]: Started session-1.scope. Dec 13 14:25:20.243641 systemd[1]: Started sshd@1-10.0.0.92:22-10.0.0.1:53218.service. Dec 13 14:25:20.284484 sshd[1404]: Accepted publickey for core from 10.0.0.1 port 53218 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg Dec 13 14:25:20.285642 sshd[1404]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:25:20.289536 systemd-logind[1295]: New session 2 of user core. Dec 13 14:25:20.290302 systemd[1]: Started session-2.scope. Dec 13 14:25:20.346278 sshd[1404]: pam_unix(sshd:session): session closed for user core Dec 13 14:25:20.348846 systemd[1]: Started sshd@2-10.0.0.92:22-10.0.0.1:53230.service. Dec 13 14:25:20.349298 systemd[1]: sshd@1-10.0.0.92:22-10.0.0.1:53218.service: Deactivated successfully. Dec 13 14:25:20.350365 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 14:25:20.350426 systemd-logind[1295]: Session 2 logged out. Waiting for processes to exit. Dec 13 14:25:20.351335 systemd-logind[1295]: Removed session 2. Dec 13 14:25:20.388064 sshd[1409]: Accepted publickey for core from 10.0.0.1 port 53230 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg Dec 13 14:25:20.389317 sshd[1409]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:25:20.393193 systemd-logind[1295]: New session 3 of user core. Dec 13 14:25:20.394166 systemd[1]: Started session-3.scope. Dec 13 14:25:20.445335 sshd[1409]: pam_unix(sshd:session): session closed for user core Dec 13 14:25:20.448083 systemd[1]: Started sshd@3-10.0.0.92:22-10.0.0.1:53232.service. Dec 13 14:25:20.448564 systemd[1]: sshd@2-10.0.0.92:22-10.0.0.1:53230.service: Deactivated successfully. Dec 13 14:25:20.450206 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 14:25:20.450882 systemd-logind[1295]: Session 3 logged out. Waiting for processes to exit. 
Dec 13 14:25:20.452496 systemd-logind[1295]: Removed session 3. Dec 13 14:25:20.485187 sshd[1417]: Accepted publickey for core from 10.0.0.1 port 53232 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg Dec 13 14:25:20.486452 sshd[1417]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:25:20.489927 systemd-logind[1295]: New session 4 of user core. Dec 13 14:25:20.490734 systemd[1]: Started session-4.scope. Dec 13 14:25:20.546657 sshd[1417]: pam_unix(sshd:session): session closed for user core Dec 13 14:25:20.549933 systemd[1]: Started sshd@4-10.0.0.92:22-10.0.0.1:53238.service. Dec 13 14:25:20.550554 systemd[1]: sshd@3-10.0.0.92:22-10.0.0.1:53232.service: Deactivated successfully. Dec 13 14:25:20.551737 systemd-logind[1295]: Session 4 logged out. Waiting for processes to exit. Dec 13 14:25:20.551820 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 14:25:20.553230 systemd-logind[1295]: Removed session 4. Dec 13 14:25:20.587513 sshd[1424]: Accepted publickey for core from 10.0.0.1 port 53238 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg Dec 13 14:25:20.588859 sshd[1424]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:25:20.592531 systemd-logind[1295]: New session 5 of user core. Dec 13 14:25:20.593569 systemd[1]: Started session-5.scope. Dec 13 14:25:20.651794 sudo[1429]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 14:25:20.652010 sudo[1429]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 14:25:20.664503 systemd[1]: Starting coreos-metadata.service... Dec 13 14:25:20.671171 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 13 14:25:20.671466 systemd[1]: Finished coreos-metadata.service. Dec 13 14:25:21.406781 systemd[1]: Stopped kubelet.service. Dec 13 14:25:21.409004 systemd[1]: Starting kubelet.service... Dec 13 14:25:21.429007 systemd[1]: Reloading. 
Dec 13 14:25:21.492831 /usr/lib/systemd/system-generators/torcx-generator[1500]: time="2024-12-13T14:25:21Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:25:21.493234 /usr/lib/systemd/system-generators/torcx-generator[1500]: time="2024-12-13T14:25:21Z" level=info msg="torcx already run" Dec 13 14:25:21.727235 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:25:21.727257 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:25:21.744850 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:25:21.831388 systemd[1]: Started kubelet.service. Dec 13 14:25:21.833555 systemd[1]: Stopping kubelet.service... Dec 13 14:25:21.834100 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 14:25:21.834407 systemd[1]: Stopped kubelet.service. Dec 13 14:25:21.836034 systemd[1]: Starting kubelet.service... Dec 13 14:25:21.913926 systemd[1]: Started kubelet.service. Dec 13 14:25:22.000543 kubelet[1560]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:25:22.000543 kubelet[1560]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Dec 13 14:25:22.000543 kubelet[1560]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:25:22.000543 kubelet[1560]: I1213 14:25:22.000477 1560 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 14:25:22.558929 kubelet[1560]: I1213 14:25:22.558874 1560 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 14:25:22.558929 kubelet[1560]: I1213 14:25:22.558920 1560 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 14:25:22.559243 kubelet[1560]: I1213 14:25:22.559225 1560 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 14:25:22.581977 kubelet[1560]: I1213 14:25:22.581937 1560 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:25:22.596740 kubelet[1560]: I1213 14:25:22.596681 1560 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 14:25:22.599228 kubelet[1560]: I1213 14:25:22.599207 1560 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 14:25:22.599396 kubelet[1560]: I1213 14:25:22.599380 1560 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 14:25:22.599498 kubelet[1560]: I1213 14:25:22.599405 1560 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 14:25:22.599498 kubelet[1560]: I1213 14:25:22.599414 1560 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 14:25:22.599543 kubelet[1560]: 
I1213 14:25:22.599510 1560 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:25:22.599629 kubelet[1560]: I1213 14:25:22.599613 1560 kubelet.go:396] "Attempting to sync node with API server" Dec 13 14:25:22.599656 kubelet[1560]: I1213 14:25:22.599633 1560 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 14:25:22.599676 kubelet[1560]: I1213 14:25:22.599658 1560 kubelet.go:312] "Adding apiserver pod source" Dec 13 14:25:22.599696 kubelet[1560]: I1213 14:25:22.599683 1560 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 14:25:22.599906 kubelet[1560]: E1213 14:25:22.599847 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:22.600469 kubelet[1560]: E1213 14:25:22.600428 1560 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:22.600954 kubelet[1560]: I1213 14:25:22.600926 1560 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 14:25:22.603295 kubelet[1560]: I1213 14:25:22.603276 1560 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 14:25:22.605357 kubelet[1560]: W1213 14:25:22.605333 1560 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Dec 13 14:25:22.605893 kubelet[1560]: I1213 14:25:22.605871 1560 server.go:1256] "Started kubelet"
Dec 13 14:25:22.606782 kubelet[1560]: I1213 14:25:22.606199 1560 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 14:25:22.606782 kubelet[1560]: I1213 14:25:22.606561 1560 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 14:25:22.606782 kubelet[1560]: I1213 14:25:22.606622 1560 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 14:25:22.608636 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Dec 13 14:25:22.609202 kubelet[1560]: I1213 14:25:22.609153 1560 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 14:25:22.609514 kubelet[1560]: I1213 14:25:22.609472 1560 server.go:461] "Adding debug handlers to kubelet server"
Dec 13 14:25:22.616306 kubelet[1560]: E1213 14:25:22.616269 1560 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.92\" not found"
Dec 13 14:25:22.616306 kubelet[1560]: I1213 14:25:22.616292 1560 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 14:25:22.616468 kubelet[1560]: I1213 14:25:22.616390 1560 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Dec 13 14:25:22.616499 kubelet[1560]: I1213 14:25:22.616490 1560 reconciler_new.go:29] "Reconciler: start to sync state"
Dec 13 14:25:22.621592 kubelet[1560]: W1213 14:25:22.621570 1560 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "10.0.0.92" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Dec 13 14:25:22.621730 kubelet[1560]: E1213 14:25:22.621696 1560 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.92" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Dec 13 14:25:22.621863 kubelet[1560]: W1213 14:25:22.621844 1560 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Dec 13 14:25:22.621941 kubelet[1560]: E1213 14:25:22.621927 1560 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Dec 13 14:25:22.622297 kubelet[1560]: I1213 14:25:22.622283 1560 factory.go:221] Registration of the systemd container factory successfully
Dec 13 14:25:22.622451 kubelet[1560]: I1213 14:25:22.622429 1560 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 14:25:22.623527 kubelet[1560]: E1213 14:25:22.623514 1560 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 14:25:22.624129 kubelet[1560]: I1213 14:25:22.624116 1560 factory.go:221] Registration of the containerd container factory successfully
Dec 13 14:25:22.641122 kubelet[1560]: E1213 14:25:22.641092 1560 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.92\" not found" node="10.0.0.92"
Dec 13 14:25:22.642480 kubelet[1560]: I1213 14:25:22.642450 1560 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 14:25:22.642480 kubelet[1560]: I1213 14:25:22.642474 1560 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 14:25:22.642550 kubelet[1560]: I1213 14:25:22.642501 1560 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 14:25:22.717773 kubelet[1560]: I1213 14:25:22.717704 1560 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.92"
Dec 13 14:25:22.792419 kubelet[1560]: I1213 14:25:22.792355 1560 kubelet_node_status.go:76] "Successfully registered node" node="10.0.0.92"
Dec 13 14:25:22.858894 kubelet[1560]: E1213 14:25:22.858747 1560 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.92\" not found"
Dec 13 14:25:22.868762 kubelet[1560]: I1213 14:25:22.868703 1560 policy_none.go:49] "None policy: Start"
Dec 13 14:25:22.869654 kubelet[1560]: I1213 14:25:22.869631 1560 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 14:25:22.869654 kubelet[1560]: I1213 14:25:22.869657 1560 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 14:25:22.880770 kubelet[1560]: I1213 14:25:22.880689 1560 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 14:25:22.880980 kubelet[1560]: I1213 14:25:22.880952 1560 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 14:25:22.883091 kubelet[1560]: E1213 14:25:22.883056 1560 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.92\" not found"
Dec 13 14:25:22.917318 kubelet[1560]: I1213 14:25:22.917282 1560 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 14:25:22.918522 kubelet[1560]: I1213 14:25:22.918490 1560 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 14:25:22.918598 kubelet[1560]: I1213 14:25:22.918550 1560 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 14:25:22.918598 kubelet[1560]: I1213 14:25:22.918593 1560 kubelet.go:2329] "Starting kubelet main sync loop"
Dec 13 14:25:22.918658 kubelet[1560]: E1213 14:25:22.918648 1560 kubelet.go:2353] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Dec 13 14:25:22.959380 kubelet[1560]: E1213 14:25:22.959318 1560 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.92\" not found"
Dec 13 14:25:23.060668 kubelet[1560]: E1213 14:25:23.060441 1560 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.92\" not found"
Dec 13 14:25:23.161592 kubelet[1560]: E1213 14:25:23.161434 1560 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.92\" not found"
Dec 13 14:25:23.262388 kubelet[1560]: E1213 14:25:23.262311 1560 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.92\" not found"
Dec 13 14:25:23.363409 kubelet[1560]: E1213 14:25:23.363339 1560 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.92\" not found"
Dec 13 14:25:23.463662 kubelet[1560]: E1213 14:25:23.463472 1560 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.92\" not found"
Dec 13 14:25:23.509860 sudo[1429]: pam_unix(sudo:session): session closed for user root
Dec 13 14:25:23.511575 sshd[1424]: pam_unix(sshd:session): session closed for user core
Dec 13 14:25:23.514312 systemd[1]: sshd@4-10.0.0.92:22-10.0.0.1:53238.service: Deactivated successfully.
Dec 13 14:25:23.515278 systemd-logind[1295]: Session 5 logged out. Waiting for processes to exit.
Dec 13 14:25:23.515295 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 14:25:23.516257 systemd-logind[1295]: Removed session 5.
Dec 13 14:25:23.561364 kubelet[1560]: I1213 14:25:23.561306 1560 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Dec 13 14:25:23.561578 kubelet[1560]: W1213 14:25:23.561540 1560 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.RuntimeClass ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received
Dec 13 14:25:23.561638 kubelet[1560]: W1213 14:25:23.561540 1560 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.CSIDriver ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received
Dec 13 14:25:23.564487 kubelet[1560]: E1213 14:25:23.564447 1560 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.92\" not found"
Dec 13 14:25:23.601066 kubelet[1560]: E1213 14:25:23.600985 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:25:23.665547 kubelet[1560]: E1213 14:25:23.665489 1560 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.92\" not found"
Dec 13 14:25:23.766797 kubelet[1560]: E1213 14:25:23.766595 1560 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.92\" not found"
Dec 13 14:25:23.867823 kubelet[1560]: E1213 14:25:23.867709 1560 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.92\" not found"
Dec 13 14:25:23.968912 kubelet[1560]: I1213 14:25:23.968877 1560 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Dec 13 14:25:23.969305 env[1315]: time="2024-12-13T14:25:23.969251960Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 13 14:25:23.969627 kubelet[1560]: I1213 14:25:23.969539 1560 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Dec 13 14:25:24.601622 kubelet[1560]: I1213 14:25:24.601579 1560 apiserver.go:52] "Watching apiserver"
Dec 13 14:25:24.601622 kubelet[1560]: E1213 14:25:24.601603 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:25:24.605549 kubelet[1560]: I1213 14:25:24.605525 1560 topology_manager.go:215] "Topology Admit Handler" podUID="c1a1ede5-074f-4ad2-861a-434f3e54b111" podNamespace="kube-system" podName="kube-proxy-jb8k4"
Dec 13 14:25:24.605612 kubelet[1560]: I1213 14:25:24.605603 1560 topology_manager.go:215] "Topology Admit Handler" podUID="6d3804ce-aca6-4530-9169-4cd9a87b7c3e" podNamespace="kube-system" podName="cilium-gmnmk"
Dec 13 14:25:24.616765 kubelet[1560]: I1213 14:25:24.616735 1560 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Dec 13 14:25:24.629822 kubelet[1560]: I1213 14:25:24.629773 1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6d3804ce-aca6-4530-9169-4cd9a87b7c3e-host-proc-sys-kernel\") pod \"cilium-gmnmk\" (UID: \"6d3804ce-aca6-4530-9169-4cd9a87b7c3e\") " pod="kube-system/cilium-gmnmk"
Dec 13 14:25:24.629953 kubelet[1560]: I1213 14:25:24.629879 1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrlfh\" (UniqueName: \"kubernetes.io/projected/6d3804ce-aca6-4530-9169-4cd9a87b7c3e-kube-api-access-qrlfh\") pod \"cilium-gmnmk\" (UID: \"6d3804ce-aca6-4530-9169-4cd9a87b7c3e\") " pod="kube-system/cilium-gmnmk"
Dec 13 14:25:24.629953 kubelet[1560]: I1213 14:25:24.629920 1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6d3804ce-aca6-4530-9169-4cd9a87b7c3e-clustermesh-secrets\") pod \"cilium-gmnmk\" (UID: \"6d3804ce-aca6-4530-9169-4cd9a87b7c3e\") " pod="kube-system/cilium-gmnmk"
Dec 13 14:25:24.629953 kubelet[1560]: I1213 14:25:24.629955 1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6d3804ce-aca6-4530-9169-4cd9a87b7c3e-host-proc-sys-net\") pod \"cilium-gmnmk\" (UID: \"6d3804ce-aca6-4530-9169-4cd9a87b7c3e\") " pod="kube-system/cilium-gmnmk"
Dec 13 14:25:24.630063 kubelet[1560]: I1213 14:25:24.629993 1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c1a1ede5-074f-4ad2-861a-434f3e54b111-lib-modules\") pod \"kube-proxy-jb8k4\" (UID: \"c1a1ede5-074f-4ad2-861a-434f3e54b111\") " pod="kube-system/kube-proxy-jb8k4"
Dec 13 14:25:24.630087 kubelet[1560]: I1213 14:25:24.630063 1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxf2h\" (UniqueName: \"kubernetes.io/projected/c1a1ede5-074f-4ad2-861a-434f3e54b111-kube-api-access-cxf2h\") pod \"kube-proxy-jb8k4\" (UID: \"c1a1ede5-074f-4ad2-861a-434f3e54b111\") " pod="kube-system/kube-proxy-jb8k4"
Dec 13 14:25:24.630135 kubelet[1560]: I1213 14:25:24.630118 1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6d3804ce-aca6-4530-9169-4cd9a87b7c3e-cilium-cgroup\") pod \"cilium-gmnmk\" (UID: \"6d3804ce-aca6-4530-9169-4cd9a87b7c3e\") " pod="kube-system/cilium-gmnmk"
Dec 13 14:25:24.630205 kubelet[1560]: I1213 14:25:24.630151 1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6d3804ce-aca6-4530-9169-4cd9a87b7c3e-cni-path\") pod \"cilium-gmnmk\" (UID: \"6d3804ce-aca6-4530-9169-4cd9a87b7c3e\") " pod="kube-system/cilium-gmnmk"
Dec 13 14:25:24.630205 kubelet[1560]: I1213 14:25:24.630177 1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6d3804ce-aca6-4530-9169-4cd9a87b7c3e-etc-cni-netd\") pod \"cilium-gmnmk\" (UID: \"6d3804ce-aca6-4530-9169-4cd9a87b7c3e\") " pod="kube-system/cilium-gmnmk"
Dec 13 14:25:24.630250 kubelet[1560]: I1213 14:25:24.630210 1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6d3804ce-aca6-4530-9169-4cd9a87b7c3e-xtables-lock\") pod \"cilium-gmnmk\" (UID: \"6d3804ce-aca6-4530-9169-4cd9a87b7c3e\") " pod="kube-system/cilium-gmnmk"
Dec 13 14:25:24.630250 kubelet[1560]: I1213 14:25:24.630241 1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6d3804ce-aca6-4530-9169-4cd9a87b7c3e-bpf-maps\") pod \"cilium-gmnmk\" (UID: \"6d3804ce-aca6-4530-9169-4cd9a87b7c3e\") " pod="kube-system/cilium-gmnmk"
Dec 13 14:25:24.630305 kubelet[1560]: I1213 14:25:24.630258 1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6d3804ce-aca6-4530-9169-4cd9a87b7c3e-lib-modules\") pod \"cilium-gmnmk\" (UID: \"6d3804ce-aca6-4530-9169-4cd9a87b7c3e\") " pod="kube-system/cilium-gmnmk"
Dec 13 14:25:24.630305 kubelet[1560]: I1213 14:25:24.630278 1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6d3804ce-aca6-4530-9169-4cd9a87b7c3e-cilium-config-path\") pod \"cilium-gmnmk\" (UID: \"6d3804ce-aca6-4530-9169-4cd9a87b7c3e\") " pod="kube-system/cilium-gmnmk"
Dec 13 14:25:24.630387 kubelet[1560]: I1213 14:25:24.630316 1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c1a1ede5-074f-4ad2-861a-434f3e54b111-kube-proxy\") pod \"kube-proxy-jb8k4\" (UID: \"c1a1ede5-074f-4ad2-861a-434f3e54b111\") " pod="kube-system/kube-proxy-jb8k4"
Dec 13 14:25:24.630387 kubelet[1560]: I1213 14:25:24.630347 1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c1a1ede5-074f-4ad2-861a-434f3e54b111-xtables-lock\") pod \"kube-proxy-jb8k4\" (UID: \"c1a1ede5-074f-4ad2-861a-434f3e54b111\") " pod="kube-system/kube-proxy-jb8k4"
Dec 13 14:25:24.630387 kubelet[1560]: I1213 14:25:24.630382 1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6d3804ce-aca6-4530-9169-4cd9a87b7c3e-cilium-run\") pod \"cilium-gmnmk\" (UID: \"6d3804ce-aca6-4530-9169-4cd9a87b7c3e\") " pod="kube-system/cilium-gmnmk"
Dec 13 14:25:24.630451 kubelet[1560]: I1213 14:25:24.630406 1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6d3804ce-aca6-4530-9169-4cd9a87b7c3e-hostproc\") pod \"cilium-gmnmk\" (UID: \"6d3804ce-aca6-4530-9169-4cd9a87b7c3e\") " pod="kube-system/cilium-gmnmk"
Dec 13 14:25:24.630451 kubelet[1560]: I1213 14:25:24.630434 1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6d3804ce-aca6-4530-9169-4cd9a87b7c3e-hubble-tls\") pod \"cilium-gmnmk\" (UID: \"6d3804ce-aca6-4530-9169-4cd9a87b7c3e\") " pod="kube-system/cilium-gmnmk"
Dec 13 14:25:24.908844 kubelet[1560]: E1213 14:25:24.907989 1560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:25:24.909006 env[1315]: time="2024-12-13T14:25:24.908672108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jb8k4,Uid:c1a1ede5-074f-4ad2-861a-434f3e54b111,Namespace:kube-system,Attempt:0,}"
Dec 13 14:25:24.909907 kubelet[1560]: E1213 14:25:24.909871 1560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:25:24.910223 env[1315]: time="2024-12-13T14:25:24.910165992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gmnmk,Uid:6d3804ce-aca6-4530-9169-4cd9a87b7c3e,Namespace:kube-system,Attempt:0,}"
Dec 13 14:25:25.601992 kubelet[1560]: E1213 14:25:25.601928 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:25:25.862871 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2936120957.mount: Deactivated successfully.
Dec 13 14:25:25.870870 env[1315]: time="2024-12-13T14:25:25.870808460Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:25:25.875876 env[1315]: time="2024-12-13T14:25:25.875824368Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:25:25.876914 env[1315]: time="2024-12-13T14:25:25.876850261Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:25:25.878868 env[1315]: time="2024-12-13T14:25:25.878830167Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:25:25.879976 env[1315]: time="2024-12-13T14:25:25.879945632Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:25:25.881243 env[1315]: time="2024-12-13T14:25:25.881202157Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:25:25.885734 env[1315]: time="2024-12-13T14:25:25.885676828Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:25:25.888434 env[1315]: time="2024-12-13T14:25:25.888385869Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:25:25.902764 env[1315]: time="2024-12-13T14:25:25.902531930Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:25:25.902764 env[1315]: time="2024-12-13T14:25:25.902570235Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:25:25.902764 env[1315]: time="2024-12-13T14:25:25.902585848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:25:25.902764 env[1315]: time="2024-12-13T14:25:25.902702368Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ad144b238cb24c2ac99eb1bc65066c8e93f2b03e23a183e6f33a63929bef2838 pid=1616 runtime=io.containerd.runc.v2
Dec 13 14:25:25.907415 env[1315]: time="2024-12-13T14:25:25.905076125Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:25:25.907415 env[1315]: time="2024-12-13T14:25:25.905140936Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:25:25.907415 env[1315]: time="2024-12-13T14:25:25.905161821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:25:25.907415 env[1315]: time="2024-12-13T14:25:25.907141346Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cd816ba3f72fa13aa4fdd65fd8201af2bc71d5de75bdc0a5d722f35db43ee4e9 pid=1629 runtime=io.containerd.runc.v2
Dec 13 14:25:25.941843 env[1315]: time="2024-12-13T14:25:25.941801967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gmnmk,Uid:6d3804ce-aca6-4530-9169-4cd9a87b7c3e,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad144b238cb24c2ac99eb1bc65066c8e93f2b03e23a183e6f33a63929bef2838\""
Dec 13 14:25:25.943014 kubelet[1560]: E1213 14:25:25.942988 1560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:25:25.945659 env[1315]: time="2024-12-13T14:25:25.945596170Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Dec 13 14:25:25.948651 env[1315]: time="2024-12-13T14:25:25.948591025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jb8k4,Uid:c1a1ede5-074f-4ad2-861a-434f3e54b111,Namespace:kube-system,Attempt:0,} returns sandbox id \"cd816ba3f72fa13aa4fdd65fd8201af2bc71d5de75bdc0a5d722f35db43ee4e9\""
Dec 13 14:25:25.949637 kubelet[1560]: E1213 14:25:25.949609 1560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:25:26.603657 kubelet[1560]: E1213 14:25:26.603607 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:25:27.604468 kubelet[1560]: E1213 14:25:27.604401 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:25:28.605662 kubelet[1560]: E1213 14:25:28.605581 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:25:29.606272 kubelet[1560]: E1213 14:25:29.606230 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:25:30.606737 kubelet[1560]: E1213 14:25:30.606677 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:25:31.607744 kubelet[1560]: E1213 14:25:31.607679 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:25:32.608448 kubelet[1560]: E1213 14:25:32.608396 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:25:33.609527 kubelet[1560]: E1213 14:25:33.609465 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:25:34.610517 kubelet[1560]: E1213 14:25:34.610451 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:25:34.622766 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount925979360.mount: Deactivated successfully.
Dec 13 14:25:35.611770 kubelet[1560]: E1213 14:25:35.611680 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:25:36.612990 kubelet[1560]: E1213 14:25:36.612920 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:25:37.613168 kubelet[1560]: E1213 14:25:37.613101 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:25:38.613686 kubelet[1560]: E1213 14:25:38.613575 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:25:38.748691 env[1315]: time="2024-12-13T14:25:38.748605954Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:25:38.751619 env[1315]: time="2024-12-13T14:25:38.751563449Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:25:38.755021 env[1315]: time="2024-12-13T14:25:38.754968255Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:25:38.755615 env[1315]: time="2024-12-13T14:25:38.755560031Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Dec 13 14:25:38.756692 env[1315]: time="2024-12-13T14:25:38.756650743Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\""
Dec 13 14:25:38.757751 env[1315]: time="2024-12-13T14:25:38.757664912Z" level=info msg="CreateContainer within sandbox \"ad144b238cb24c2ac99eb1bc65066c8e93f2b03e23a183e6f33a63929bef2838\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 14:25:38.776121 env[1315]: time="2024-12-13T14:25:38.776032052Z" level=info msg="CreateContainer within sandbox \"ad144b238cb24c2ac99eb1bc65066c8e93f2b03e23a183e6f33a63929bef2838\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5ffcec342b5470eb7a002a2582aae2c6e493a9f5c7f442db6aa6bb599fd72142\""
Dec 13 14:25:38.777141 env[1315]: time="2024-12-13T14:25:38.777079998Z" level=info msg="StartContainer for \"5ffcec342b5470eb7a002a2582aae2c6e493a9f5c7f442db6aa6bb599fd72142\""
Dec 13 14:25:38.827636 env[1315]: time="2024-12-13T14:25:38.827570180Z" level=info msg="StartContainer for \"5ffcec342b5470eb7a002a2582aae2c6e493a9f5c7f442db6aa6bb599fd72142\" returns successfully"
Dec 13 14:25:38.946379 kubelet[1560]: E1213 14:25:38.946249 1560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:25:39.337784 env[1315]: time="2024-12-13T14:25:39.337606312Z" level=info msg="shim disconnected" id=5ffcec342b5470eb7a002a2582aae2c6e493a9f5c7f442db6aa6bb599fd72142
Dec 13 14:25:39.337784 env[1315]: time="2024-12-13T14:25:39.337680476Z" level=warning msg="cleaning up after shim disconnected" id=5ffcec342b5470eb7a002a2582aae2c6e493a9f5c7f442db6aa6bb599fd72142 namespace=k8s.io
Dec 13 14:25:39.337784 env[1315]: time="2024-12-13T14:25:39.337692582Z" level=info msg="cleaning up dead shim"
Dec 13 14:25:39.344893 env[1315]: time="2024-12-13T14:25:39.344838342Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:25:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1740 runtime=io.containerd.runc.v2\n"
Dec 13 14:25:39.614710 kubelet[1560]: E1213 14:25:39.614678 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:25:39.768458 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5ffcec342b5470eb7a002a2582aae2c6e493a9f5c7f442db6aa6bb599fd72142-rootfs.mount: Deactivated successfully.
Dec 13 14:25:39.949753 kubelet[1560]: E1213 14:25:39.949628 1560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:25:39.951452 env[1315]: time="2024-12-13T14:25:39.951402567Z" level=info msg="CreateContainer within sandbox \"ad144b238cb24c2ac99eb1bc65066c8e93f2b03e23a183e6f33a63929bef2838\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 14:25:40.046318 env[1315]: time="2024-12-13T14:25:40.046242481Z" level=info msg="CreateContainer within sandbox \"ad144b238cb24c2ac99eb1bc65066c8e93f2b03e23a183e6f33a63929bef2838\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"287f1726dcde39d6512cc45391ac24ecfcd78a57101ce4e74bafbaf24ad43cee\""
Dec 13 14:25:40.046994 env[1315]: time="2024-12-13T14:25:40.046945845Z" level=info msg="StartContainer for \"287f1726dcde39d6512cc45391ac24ecfcd78a57101ce4e74bafbaf24ad43cee\""
Dec 13 14:25:40.095960 env[1315]: time="2024-12-13T14:25:40.095888664Z" level=info msg="StartContainer for \"287f1726dcde39d6512cc45391ac24ecfcd78a57101ce4e74bafbaf24ad43cee\" returns successfully"
Dec 13 14:25:40.103891 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 14:25:40.104423 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 14:25:40.104602 systemd[1]: Stopping systemd-sysctl.service...
Dec 13 14:25:40.106123 systemd[1]: Starting systemd-sysctl.service...
Dec 13 14:25:40.115148 systemd[1]: Finished systemd-sysctl.service.
Dec 13 14:25:40.156629 env[1315]: time="2024-12-13T14:25:40.156559807Z" level=info msg="shim disconnected" id=287f1726dcde39d6512cc45391ac24ecfcd78a57101ce4e74bafbaf24ad43cee
Dec 13 14:25:40.156629 env[1315]: time="2024-12-13T14:25:40.156622413Z" level=warning msg="cleaning up after shim disconnected" id=287f1726dcde39d6512cc45391ac24ecfcd78a57101ce4e74bafbaf24ad43cee namespace=k8s.io
Dec 13 14:25:40.156629 env[1315]: time="2024-12-13T14:25:40.156635040Z" level=info msg="cleaning up dead shim"
Dec 13 14:25:40.164140 env[1315]: time="2024-12-13T14:25:40.164073807Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:25:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1807 runtime=io.containerd.runc.v2\n"
Dec 13 14:25:40.615314 kubelet[1560]: E1213 14:25:40.615256 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:25:40.768628 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-287f1726dcde39d6512cc45391ac24ecfcd78a57101ce4e74bafbaf24ad43cee-rootfs.mount: Deactivated successfully.
Dec 13 14:25:40.768786 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3006672112.mount: Deactivated successfully.
Dec 13 14:25:40.951992 kubelet[1560]: E1213 14:25:40.951892 1560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:25:40.953706 env[1315]: time="2024-12-13T14:25:40.953665286Z" level=info msg="CreateContainer within sandbox \"ad144b238cb24c2ac99eb1bc65066c8e93f2b03e23a183e6f33a63929bef2838\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 14:25:41.181597 env[1315]: time="2024-12-13T14:25:41.181547832Z" level=info msg="CreateContainer within sandbox \"ad144b238cb24c2ac99eb1bc65066c8e93f2b03e23a183e6f33a63929bef2838\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"956650c3eef9d5b46c6c51975a63a35e72eaadcd8bfcbd63c48890cb5f89ea4a\""
Dec 13 14:25:41.181966 env[1315]: time="2024-12-13T14:25:41.181915045Z" level=info msg="StartContainer for \"956650c3eef9d5b46c6c51975a63a35e72eaadcd8bfcbd63c48890cb5f89ea4a\""
Dec 13 14:25:41.192679 env[1315]: time="2024-12-13T14:25:41.192638510Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:25:41.258418 env[1315]: time="2024-12-13T14:25:41.258285881Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:25:41.262665 env[1315]: time="2024-12-13T14:25:41.262631443Z" level=info msg="StartContainer for \"956650c3eef9d5b46c6c51975a63a35e72eaadcd8bfcbd63c48890cb5f89ea4a\" returns successfully"
Dec 13 14:25:41.393616 env[1315]: time="2024-12-13T14:25:41.393553641Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:25:41.615869 kubelet[1560]: E1213 14:25:41.615812 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:25:41.684895 env[1315]: time="2024-12-13T14:25:41.684829335Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:25:41.685320 env[1315]: time="2024-12-13T14:25:41.685290888Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\""
Dec 13 14:25:41.687089 env[1315]: time="2024-12-13T14:25:41.687056792Z" level=info msg="CreateContainer within sandbox \"cd816ba3f72fa13aa4fdd65fd8201af2bc71d5de75bdc0a5d722f35db43ee4e9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 14:25:41.690472 env[1315]: time="2024-12-13T14:25:41.690413606Z" level=info msg="shim disconnected" id=956650c3eef9d5b46c6c51975a63a35e72eaadcd8bfcbd63c48890cb5f89ea4a
Dec 13 14:25:41.690472 env[1315]: time="2024-12-13T14:25:41.690461909Z" level=warning msg="cleaning up after shim disconnected" id=956650c3eef9d5b46c6c51975a63a35e72eaadcd8bfcbd63c48890cb5f89ea4a namespace=k8s.io
Dec 13 14:25:41.690472 env[1315]: time="2024-12-13T14:25:41.690471710Z" level=info msg="cleaning up dead shim"
Dec 13 14:25:41.697095 env[1315]: time="2024-12-13T14:25:41.697061665Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:25:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1863 runtime=io.containerd.runc.v2\n"
Dec 13 14:25:41.717851 env[1315]: time="2024-12-13T14:25:41.717772345Z" level=info msg="CreateContainer within sandbox \"cd816ba3f72fa13aa4fdd65fd8201af2bc71d5de75bdc0a5d722f35db43ee4e9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f4f3352e7caafc67df5ba57a17daef1c4a9a3cc5da1eb426588af604dbc4504d\""
Dec 13 14:25:41.718435 env[1315]: time="2024-12-13T14:25:41.718395052Z" level=info msg="StartContainer for \"f4f3352e7caafc67df5ba57a17daef1c4a9a3cc5da1eb426588af604dbc4504d\""
Dec 13 14:25:41.769660 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-956650c3eef9d5b46c6c51975a63a35e72eaadcd8bfcbd63c48890cb5f89ea4a-rootfs.mount: Deactivated successfully.
Dec 13 14:25:41.771705 env[1315]: time="2024-12-13T14:25:41.770615156Z" level=info msg="StartContainer for \"f4f3352e7caafc67df5ba57a17daef1c4a9a3cc5da1eb426588af604dbc4504d\" returns successfully"
Dec 13 14:25:41.954646 kubelet[1560]: E1213 14:25:41.954526 1560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:25:41.956929 kubelet[1560]: E1213 14:25:41.956896 1560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:25:41.959140 env[1315]: time="2024-12-13T14:25:41.959093492Z" level=info msg="CreateContainer within sandbox \"ad144b238cb24c2ac99eb1bc65066c8e93f2b03e23a183e6f33a63929bef2838\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 14:25:42.053320 kubelet[1560]: I1213 14:25:42.053280 1560 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-jb8k4" podStartSLOduration=4.317668821 podStartE2EDuration="20.053193206s" podCreationTimestamp="2024-12-13 14:25:22 +0000 UTC" firstStartedPulling="2024-12-13 14:25:25.95002811 +0000 UTC m=+4.001974338" lastFinishedPulling="2024-12-13 14:25:41.685552495 +0000 UTC m=+19.737498723" observedRunningTime="2024-12-13 14:25:42.052905553 +0000 UTC m=+20.104851821" watchObservedRunningTime="2024-12-13 14:25:42.053193206 +0000 UTC m=+20.105139424"
Dec 13 14:25:42.140287 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2684204430.mount: Deactivated successfully.
Dec 13 14:25:42.143411 env[1315]: time="2024-12-13T14:25:42.143356705Z" level=info msg="CreateContainer within sandbox \"ad144b238cb24c2ac99eb1bc65066c8e93f2b03e23a183e6f33a63929bef2838\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"13142c396c3dc7a28787ad20786d4e5c0f44f82b645c3b9115cf1f9b96e92d14\""
Dec 13 14:25:42.144139 env[1315]: time="2024-12-13T14:25:42.144014385Z" level=info msg="StartContainer for \"13142c396c3dc7a28787ad20786d4e5c0f44f82b645c3b9115cf1f9b96e92d14\""
Dec 13 14:25:42.186530 env[1315]: time="2024-12-13T14:25:42.186475804Z" level=info msg="StartContainer for \"13142c396c3dc7a28787ad20786d4e5c0f44f82b645c3b9115cf1f9b96e92d14\" returns successfully"
Dec 13 14:25:42.205188 env[1315]: time="2024-12-13T14:25:42.205064416Z" level=info msg="shim disconnected" id=13142c396c3dc7a28787ad20786d4e5c0f44f82b645c3b9115cf1f9b96e92d14
Dec 13 14:25:42.205188 env[1315]: time="2024-12-13T14:25:42.205116866Z" level=warning msg="cleaning up after shim disconnected" id=13142c396c3dc7a28787ad20786d4e5c0f44f82b645c3b9115cf1f9b96e92d14 namespace=k8s.io
Dec 13 14:25:42.205188 env[1315]: time="2024-12-13T14:25:42.205126907Z" level=info msg="cleaning up dead shim"
Dec 13 14:25:42.211581 env[1315]: time="2024-12-13T14:25:42.211546343Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:25:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2077 runtime=io.containerd.runc.v2\n"
Dec 13 14:25:42.599819 kubelet[1560]: E1213 14:25:42.599682 1560 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:25:42.616833 kubelet[1560]: E1213 14:25:42.616799 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:25:42.768835 systemd[1]:
run-containerd-io.containerd.runtime.v2.task-k8s.io-13142c396c3dc7a28787ad20786d4e5c0f44f82b645c3b9115cf1f9b96e92d14-rootfs.mount: Deactivated successfully. Dec 13 14:25:42.959804 kubelet[1560]: E1213 14:25:42.959766 1560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:25:42.959804 kubelet[1560]: E1213 14:25:42.959792 1560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:25:42.961550 env[1315]: time="2024-12-13T14:25:42.961510526Z" level=info msg="CreateContainer within sandbox \"ad144b238cb24c2ac99eb1bc65066c8e93f2b03e23a183e6f33a63929bef2838\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 14:25:42.976728 env[1315]: time="2024-12-13T14:25:42.976683950Z" level=info msg="CreateContainer within sandbox \"ad144b238cb24c2ac99eb1bc65066c8e93f2b03e23a183e6f33a63929bef2838\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"99396370e5c247fd9349e2156ca1a486cb23492005953d9074d6d42ebcb6092f\"" Dec 13 14:25:42.977106 env[1315]: time="2024-12-13T14:25:42.977083168Z" level=info msg="StartContainer for \"99396370e5c247fd9349e2156ca1a486cb23492005953d9074d6d42ebcb6092f\"" Dec 13 14:25:43.019445 env[1315]: time="2024-12-13T14:25:43.019382083Z" level=info msg="StartContainer for \"99396370e5c247fd9349e2156ca1a486cb23492005953d9074d6d42ebcb6092f\" returns successfully" Dec 13 14:25:43.167910 kubelet[1560]: I1213 14:25:43.167867 1560 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 14:25:43.329749 kernel: Initializing XFRM netlink socket Dec 13 14:25:43.617322 kubelet[1560]: E1213 14:25:43.617273 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 
14:25:43.965655 kubelet[1560]: E1213 14:25:43.965521 1560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:25:43.979249 kubelet[1560]: I1213 14:25:43.979187 1560 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-gmnmk" podStartSLOduration=9.167179531 podStartE2EDuration="21.979124991s" podCreationTimestamp="2024-12-13 14:25:22 +0000 UTC" firstStartedPulling="2024-12-13 14:25:25.944121535 +0000 UTC m=+3.996067764" lastFinishedPulling="2024-12-13 14:25:38.756066986 +0000 UTC m=+16.808013224" observedRunningTime="2024-12-13 14:25:43.978829349 +0000 UTC m=+22.030775597" watchObservedRunningTime="2024-12-13 14:25:43.979124991 +0000 UTC m=+22.031071219" Dec 13 14:25:44.618055 kubelet[1560]: E1213 14:25:44.617994 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:44.647509 kubelet[1560]: I1213 14:25:44.647468 1560 topology_manager.go:215] "Topology Admit Handler" podUID="b84ff5e0-479d-4cb5-82c0-fded216f3bb7" podNamespace="default" podName="nginx-deployment-6d5f899847-pm2b7" Dec 13 14:25:44.679439 kubelet[1560]: I1213 14:25:44.679402 1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqspt\" (UniqueName: \"kubernetes.io/projected/b84ff5e0-479d-4cb5-82c0-fded216f3bb7-kube-api-access-kqspt\") pod \"nginx-deployment-6d5f899847-pm2b7\" (UID: \"b84ff5e0-479d-4cb5-82c0-fded216f3bb7\") " pod="default/nginx-deployment-6d5f899847-pm2b7" Dec 13 14:25:44.950602 systemd-networkd[1087]: cilium_host: Link UP Dec 13 14:25:44.950801 systemd-networkd[1087]: cilium_net: Link UP Dec 13 14:25:44.952287 systemd-networkd[1087]: cilium_net: Gained carrier Dec 13 14:25:44.952825 env[1315]: time="2024-12-13T14:25:44.952786312Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:nginx-deployment-6d5f899847-pm2b7,Uid:b84ff5e0-479d-4cb5-82c0-fded216f3bb7,Namespace:default,Attempt:0,}" Dec 13 14:25:44.953731 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Dec 13 14:25:44.953789 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Dec 13 14:25:44.953898 systemd-networkd[1087]: cilium_host: Gained carrier Dec 13 14:25:44.971477 kubelet[1560]: E1213 14:25:44.970830 1560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:25:45.042761 systemd-networkd[1087]: cilium_vxlan: Link UP Dec 13 14:25:45.042771 systemd-networkd[1087]: cilium_vxlan: Gained carrier Dec 13 14:25:45.163420 systemd-networkd[1087]: cilium_net: Gained IPv6LL Dec 13 14:25:45.237749 kernel: NET: Registered PF_ALG protocol family Dec 13 14:25:45.618940 kubelet[1560]: E1213 14:25:45.618892 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:45.796325 systemd-networkd[1087]: lxc_health: Link UP Dec 13 14:25:45.805355 systemd-networkd[1087]: lxc_health: Gained carrier Dec 13 14:25:45.805741 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 14:25:45.880964 systemd-networkd[1087]: cilium_host: Gained IPv6LL Dec 13 14:25:45.972471 kubelet[1560]: E1213 14:25:45.972417 1560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:25:46.001681 systemd-networkd[1087]: lxc18604f2d5f8b: Link UP Dec 13 14:25:46.007751 kernel: eth0: renamed from tmpfbc09 Dec 13 14:25:46.014610 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:25:46.014789 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc18604f2d5f8b: link becomes ready Dec 13 14:25:46.015008 
systemd-networkd[1087]: lxc18604f2d5f8b: Gained carrier Dec 13 14:25:46.619849 kubelet[1560]: E1213 14:25:46.619778 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:46.649995 systemd-networkd[1087]: cilium_vxlan: Gained IPv6LL Dec 13 14:25:46.974061 kubelet[1560]: E1213 14:25:46.973917 1560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:25:47.160911 systemd-networkd[1087]: lxc_health: Gained IPv6LL Dec 13 14:25:47.620655 kubelet[1560]: E1213 14:25:47.620598 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:47.801923 systemd-networkd[1087]: lxc18604f2d5f8b: Gained IPv6LL Dec 13 14:25:47.975508 kubelet[1560]: E1213 14:25:47.975372 1560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:25:48.622334 kubelet[1560]: E1213 14:25:48.622253 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:48.977061 kubelet[1560]: E1213 14:25:48.976906 1560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:25:49.437703 env[1315]: time="2024-12-13T14:25:49.437619191Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:25:49.437703 env[1315]: time="2024-12-13T14:25:49.437656766Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:25:49.437703 env[1315]: time="2024-12-13T14:25:49.437668408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:25:49.438308 env[1315]: time="2024-12-13T14:25:49.437817713Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fbc094991d2db28594794d67bb06a09040f41aead396e5a7619934e419d0af4c pid=2627 runtime=io.containerd.runc.v2 Dec 13 14:25:49.459535 systemd-resolved[1230]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 14:25:49.483590 env[1315]: time="2024-12-13T14:25:49.483547277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-pm2b7,Uid:b84ff5e0-479d-4cb5-82c0-fded216f3bb7,Namespace:default,Attempt:0,} returns sandbox id \"fbc094991d2db28594794d67bb06a09040f41aead396e5a7619934e419d0af4c\"" Dec 13 14:25:49.485175 env[1315]: time="2024-12-13T14:25:49.485141326Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 14:25:49.623452 kubelet[1560]: E1213 14:25:49.623392 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:50.623992 kubelet[1560]: E1213 14:25:50.623950 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:51.624854 kubelet[1560]: E1213 14:25:51.624799 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:52.253218 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3518755264.mount: Deactivated successfully. 
Dec 13 14:25:52.625325 kubelet[1560]: E1213 14:25:52.625277 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:53.625983 kubelet[1560]: E1213 14:25:53.625910 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:54.067975 env[1315]: time="2024-12-13T14:25:54.067827920Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:54.070169 env[1315]: time="2024-12-13T14:25:54.070102949Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:54.072077 env[1315]: time="2024-12-13T14:25:54.072037123Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:54.073936 env[1315]: time="2024-12-13T14:25:54.073909036Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:25:54.074649 env[1315]: time="2024-12-13T14:25:54.074613660Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 14:25:54.076240 env[1315]: time="2024-12-13T14:25:54.076212691Z" level=info msg="CreateContainer within sandbox \"fbc094991d2db28594794d67bb06a09040f41aead396e5a7619934e419d0af4c\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Dec 13 14:25:54.091478 env[1315]: time="2024-12-13T14:25:54.091423210Z" level=info msg="CreateContainer 
within sandbox \"fbc094991d2db28594794d67bb06a09040f41aead396e5a7619934e419d0af4c\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"554aa44c906244b138ca8bb7c460c5693aae35775c5fa57744dc3564f64bb3b6\"" Dec 13 14:25:54.091982 env[1315]: time="2024-12-13T14:25:54.091947442Z" level=info msg="StartContainer for \"554aa44c906244b138ca8bb7c460c5693aae35775c5fa57744dc3564f64bb3b6\"" Dec 13 14:25:54.134015 env[1315]: time="2024-12-13T14:25:54.133957533Z" level=info msg="StartContainer for \"554aa44c906244b138ca8bb7c460c5693aae35775c5fa57744dc3564f64bb3b6\" returns successfully" Dec 13 14:25:54.626324 kubelet[1560]: E1213 14:25:54.626267 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:54.998005 kubelet[1560]: I1213 14:25:54.997873 1560 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-pm2b7" podStartSLOduration=6.407732096 podStartE2EDuration="10.997820329s" podCreationTimestamp="2024-12-13 14:25:44 +0000 UTC" firstStartedPulling="2024-12-13 14:25:49.484873427 +0000 UTC m=+27.536819655" lastFinishedPulling="2024-12-13 14:25:54.07496165 +0000 UTC m=+32.126907888" observedRunningTime="2024-12-13 14:25:54.9974418 +0000 UTC m=+33.049388018" watchObservedRunningTime="2024-12-13 14:25:54.997820329 +0000 UTC m=+33.049766557" Dec 13 14:25:55.627287 kubelet[1560]: E1213 14:25:55.627206 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:56.627915 kubelet[1560]: E1213 14:25:56.627829 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:56.842060 kubelet[1560]: I1213 14:25:56.842012 1560 topology_manager.go:215] "Topology Admit Handler" podUID="aba1de73-8593-4ce9-9dca-fd5f1963c341" podNamespace="default" podName="nfs-server-provisioner-0" Dec 13 14:25:56.950211 
kubelet[1560]: I1213 14:25:56.950023 1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/aba1de73-8593-4ce9-9dca-fd5f1963c341-data\") pod \"nfs-server-provisioner-0\" (UID: \"aba1de73-8593-4ce9-9dca-fd5f1963c341\") " pod="default/nfs-server-provisioner-0" Dec 13 14:25:56.950211 kubelet[1560]: I1213 14:25:56.950094 1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf2vs\" (UniqueName: \"kubernetes.io/projected/aba1de73-8593-4ce9-9dca-fd5f1963c341-kube-api-access-sf2vs\") pod \"nfs-server-provisioner-0\" (UID: \"aba1de73-8593-4ce9-9dca-fd5f1963c341\") " pod="default/nfs-server-provisioner-0" Dec 13 14:25:57.145861 env[1315]: time="2024-12-13T14:25:57.145812447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:aba1de73-8593-4ce9-9dca-fd5f1963c341,Namespace:default,Attempt:0,}" Dec 13 14:25:57.176541 systemd-networkd[1087]: lxcd200f6b49448: Link UP Dec 13 14:25:57.183739 kernel: eth0: renamed from tmp300b3 Dec 13 14:25:57.191281 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:25:57.191328 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcd200f6b49448: link becomes ready Dec 13 14:25:57.191586 systemd-networkd[1087]: lxcd200f6b49448: Gained carrier Dec 13 14:25:57.390231 env[1315]: time="2024-12-13T14:25:57.390158552Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:25:57.390389 env[1315]: time="2024-12-13T14:25:57.390199893Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:25:57.390389 env[1315]: time="2024-12-13T14:25:57.390212898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:25:57.390389 env[1315]: time="2024-12-13T14:25:57.390350135Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/300b322b6c83102406e46d1cd87ea5314df8f3d423daf806377127f169cfc584 pid=2755 runtime=io.containerd.runc.v2 Dec 13 14:25:57.415116 systemd-resolved[1230]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 14:25:57.439007 env[1315]: time="2024-12-13T14:25:57.438950496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:aba1de73-8593-4ce9-9dca-fd5f1963c341,Namespace:default,Attempt:0,} returns sandbox id \"300b322b6c83102406e46d1cd87ea5314df8f3d423daf806377127f169cfc584\"" Dec 13 14:25:57.440680 env[1315]: time="2024-12-13T14:25:57.440628270Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Dec 13 14:25:57.628853 kubelet[1560]: E1213 14:25:57.628795 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:58.629857 kubelet[1560]: E1213 14:25:58.629785 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:25:58.808966 systemd-networkd[1087]: lxcd200f6b49448: Gained IPv6LL Dec 13 14:25:59.630969 kubelet[1560]: E1213 14:25:59.630898 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:00.631858 kubelet[1560]: E1213 14:26:00.631792 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:00.838301 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3929243150.mount: Deactivated successfully. 
Dec 13 14:26:01.009305 update_engine[1298]: I1213 14:26:01.009148 1298 update_attempter.cc:509] Updating boot flags... Dec 13 14:26:01.632185 kubelet[1560]: E1213 14:26:01.632112 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:02.600850 kubelet[1560]: E1213 14:26:02.600791 1560 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:02.633020 kubelet[1560]: E1213 14:26:02.632969 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:03.360497 env[1315]: time="2024-12-13T14:26:03.360424150Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:03.426634 env[1315]: time="2024-12-13T14:26:03.426568525Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:03.441738 env[1315]: time="2024-12-13T14:26:03.441644557Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:03.469917 env[1315]: time="2024-12-13T14:26:03.469879582Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:03.470595 env[1315]: time="2024-12-13T14:26:03.470568507Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference 
\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Dec 13 14:26:03.472250 env[1315]: time="2024-12-13T14:26:03.472220934Z" level=info msg="CreateContainer within sandbox \"300b322b6c83102406e46d1cd87ea5314df8f3d423daf806377127f169cfc584\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Dec 13 14:26:03.633615 kubelet[1560]: E1213 14:26:03.633566 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:03.786331 env[1315]: time="2024-12-13T14:26:03.786247302Z" level=info msg="CreateContainer within sandbox \"300b322b6c83102406e46d1cd87ea5314df8f3d423daf806377127f169cfc584\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"6a385aaac98c4ae584d77e3f3b95b48d84852935a394364b1499cc37d5d2401b\"" Dec 13 14:26:03.786887 env[1315]: time="2024-12-13T14:26:03.786857596Z" level=info msg="StartContainer for \"6a385aaac98c4ae584d77e3f3b95b48d84852935a394364b1499cc37d5d2401b\"" Dec 13 14:26:03.897456 env[1315]: time="2024-12-13T14:26:03.897104850Z" level=info msg="StartContainer for \"6a385aaac98c4ae584d77e3f3b95b48d84852935a394364b1499cc37d5d2401b\" returns successfully" Dec 13 14:26:04.634245 kubelet[1560]: E1213 14:26:04.634177 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:05.635044 kubelet[1560]: E1213 14:26:05.634965 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:06.635951 kubelet[1560]: E1213 14:26:06.635869 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:07.636479 kubelet[1560]: E1213 14:26:07.636404 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:08.637629 kubelet[1560]: 
E1213 14:26:08.637548 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:09.638737 kubelet[1560]: E1213 14:26:09.638639 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:10.639216 kubelet[1560]: E1213 14:26:10.639141 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:11.640360 kubelet[1560]: E1213 14:26:11.640304 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:12.640656 kubelet[1560]: E1213 14:26:12.640577 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:13.504145 kubelet[1560]: I1213 14:26:13.504080 1560 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=11.47340177 podStartE2EDuration="17.50402691s" podCreationTimestamp="2024-12-13 14:25:56 +0000 UTC" firstStartedPulling="2024-12-13 14:25:57.440198727 +0000 UTC m=+35.492144956" lastFinishedPulling="2024-12-13 14:26:03.470823868 +0000 UTC m=+41.522770096" observedRunningTime="2024-12-13 14:26:04.050176837 +0000 UTC m=+42.102123075" watchObservedRunningTime="2024-12-13 14:26:13.50402691 +0000 UTC m=+51.555973158" Dec 13 14:26:13.504417 kubelet[1560]: I1213 14:26:13.504362 1560 topology_manager.go:215] "Topology Admit Handler" podUID="3dd04ca0-bd59-4bb6-a29f-56468a846433" podNamespace="default" podName="test-pod-1" Dec 13 14:26:13.544449 kubelet[1560]: I1213 14:26:13.544341 1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3bcb482d-a923-4570-b565-6ae77c1d941a\" (UniqueName: \"kubernetes.io/nfs/3dd04ca0-bd59-4bb6-a29f-56468a846433-pvc-3bcb482d-a923-4570-b565-6ae77c1d941a\") 
pod \"test-pod-1\" (UID: \"3dd04ca0-bd59-4bb6-a29f-56468a846433\") " pod="default/test-pod-1" Dec 13 14:26:13.544449 kubelet[1560]: I1213 14:26:13.544418 1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h447m\" (UniqueName: \"kubernetes.io/projected/3dd04ca0-bd59-4bb6-a29f-56468a846433-kube-api-access-h447m\") pod \"test-pod-1\" (UID: \"3dd04ca0-bd59-4bb6-a29f-56468a846433\") " pod="default/test-pod-1" Dec 13 14:26:13.641544 kubelet[1560]: E1213 14:26:13.641478 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:13.665774 kernel: FS-Cache: Loaded Dec 13 14:26:13.711367 kernel: RPC: Registered named UNIX socket transport module. Dec 13 14:26:13.711539 kernel: RPC: Registered udp transport module. Dec 13 14:26:13.711560 kernel: RPC: Registered tcp transport module. Dec 13 14:26:13.711591 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Dec 13 14:26:13.772772 kernel: FS-Cache: Netfs 'nfs' registered for caching Dec 13 14:26:13.959039 kernel: NFS: Registering the id_resolver key type Dec 13 14:26:13.959188 kernel: Key type id_resolver registered Dec 13 14:26:13.959214 kernel: Key type id_legacy registered Dec 13 14:26:13.983992 nfsidmap[2888]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Dec 13 14:26:13.986919 nfsidmap[2891]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Dec 13 14:26:14.107816 env[1315]: time="2024-12-13T14:26:14.107761708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:3dd04ca0-bd59-4bb6-a29f-56468a846433,Namespace:default,Attempt:0,}" Dec 13 14:26:14.132798 systemd-networkd[1087]: lxceae5f595f5b7: Link UP Dec 13 14:26:14.140744 kernel: eth0: renamed from tmpbae51 Dec 13 14:26:14.148536 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:26:14.148632 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxceae5f595f5b7: link becomes ready Dec 13 14:26:14.148125 systemd-networkd[1087]: lxceae5f595f5b7: Gained carrier Dec 13 14:26:14.330219 env[1315]: time="2024-12-13T14:26:14.330066667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:26:14.330219 env[1315]: time="2024-12-13T14:26:14.330105472Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:26:14.330219 env[1315]: time="2024-12-13T14:26:14.330115641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:26:14.330821 env[1315]: time="2024-12-13T14:26:14.330726384Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bae511b594feb9fdcc8b3ce29a713dfbf52bbce9dd49634176919617bc7ae004 pid=2924 runtime=io.containerd.runc.v2 Dec 13 14:26:14.349754 systemd-resolved[1230]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 14:26:14.371368 env[1315]: time="2024-12-13T14:26:14.371326638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:3dd04ca0-bd59-4bb6-a29f-56468a846433,Namespace:default,Attempt:0,} returns sandbox id \"bae511b594feb9fdcc8b3ce29a713dfbf52bbce9dd49634176919617bc7ae004\"" Dec 13 14:26:14.373050 env[1315]: time="2024-12-13T14:26:14.373031827Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 14:26:14.641973 kubelet[1560]: E1213 14:26:14.641914 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:15.003024 env[1315]: time="2024-12-13T14:26:15.002849964Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:15.005006 env[1315]: time="2024-12-13T14:26:15.004947075Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:15.006859 env[1315]: time="2024-12-13T14:26:15.006829860Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:15.008597 env[1315]: time="2024-12-13T14:26:15.008555023Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:15.009167 env[1315]: time="2024-12-13T14:26:15.009133285Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 14:26:15.011472 env[1315]: time="2024-12-13T14:26:15.011422583Z" level=info msg="CreateContainer within sandbox \"bae511b594feb9fdcc8b3ce29a713dfbf52bbce9dd49634176919617bc7ae004\" for container &ContainerMetadata{Name:test,Attempt:0,}" Dec 13 14:26:15.026586 env[1315]: time="2024-12-13T14:26:15.026532156Z" level=info msg="CreateContainer within sandbox \"bae511b594feb9fdcc8b3ce29a713dfbf52bbce9dd49634176919617bc7ae004\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"6c0f9ec38fd3a7117c0651deeabaa5412fb1c91496f93687190671e1d12729c4\"" Dec 13 14:26:15.027366 env[1315]: time="2024-12-13T14:26:15.027338401Z" level=info msg="StartContainer for \"6c0f9ec38fd3a7117c0651deeabaa5412fb1c91496f93687190671e1d12729c4\"" Dec 13 14:26:15.070510 env[1315]: time="2024-12-13T14:26:15.070465080Z" level=info msg="StartContainer for \"6c0f9ec38fd3a7117c0651deeabaa5412fb1c91496f93687190671e1d12729c4\" returns successfully" Dec 13 14:26:15.384928 systemd-networkd[1087]: lxceae5f595f5b7: Gained IPv6LL Dec 13 14:26:15.643063 kubelet[1560]: E1213 14:26:15.642894 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:16.043099 kubelet[1560]: I1213 14:26:16.042953 1560 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=18.406048649 podStartE2EDuration="19.042912739s" podCreationTimestamp="2024-12-13 14:25:57 +0000 UTC" firstStartedPulling="2024-12-13 14:26:14.372573703 +0000 UTC m=+52.424519932" 
lastFinishedPulling="2024-12-13 14:26:15.009437794 +0000 UTC m=+53.061384022" observedRunningTime="2024-12-13 14:26:16.042598611 +0000 UTC m=+54.094544839" watchObservedRunningTime="2024-12-13 14:26:16.042912739 +0000 UTC m=+54.094858967" Dec 13 14:26:16.643819 kubelet[1560]: E1213 14:26:16.643744 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:17.644511 kubelet[1560]: E1213 14:26:17.644432 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:18.645737 kubelet[1560]: E1213 14:26:18.645595 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:19.413383 env[1315]: time="2024-12-13T14:26:19.413300693Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:26:19.421424 env[1315]: time="2024-12-13T14:26:19.421375325Z" level=info msg="StopContainer for \"99396370e5c247fd9349e2156ca1a486cb23492005953d9074d6d42ebcb6092f\" with timeout 2 (s)" Dec 13 14:26:19.422066 env[1315]: time="2024-12-13T14:26:19.422033315Z" level=info msg="Stop container \"99396370e5c247fd9349e2156ca1a486cb23492005953d9074d6d42ebcb6092f\" with signal terminated" Dec 13 14:26:19.427662 systemd-networkd[1087]: lxc_health: Link DOWN Dec 13 14:26:19.427675 systemd-networkd[1087]: lxc_health: Lost carrier Dec 13 14:26:19.477000 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-99396370e5c247fd9349e2156ca1a486cb23492005953d9074d6d42ebcb6092f-rootfs.mount: Deactivated successfully. 
Dec 13 14:26:19.590179 env[1315]: time="2024-12-13T14:26:19.590115698Z" level=info msg="shim disconnected" id=99396370e5c247fd9349e2156ca1a486cb23492005953d9074d6d42ebcb6092f Dec 13 14:26:19.590179 env[1315]: time="2024-12-13T14:26:19.590166414Z" level=warning msg="cleaning up after shim disconnected" id=99396370e5c247fd9349e2156ca1a486cb23492005953d9074d6d42ebcb6092f namespace=k8s.io Dec 13 14:26:19.590179 env[1315]: time="2024-12-13T14:26:19.590174589Z" level=info msg="cleaning up dead shim" Dec 13 14:26:19.596937 env[1315]: time="2024-12-13T14:26:19.596875029Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:26:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3055 runtime=io.containerd.runc.v2\n" Dec 13 14:26:19.645913 kubelet[1560]: E1213 14:26:19.645836 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:19.654340 env[1315]: time="2024-12-13T14:26:19.654268091Z" level=info msg="StopContainer for \"99396370e5c247fd9349e2156ca1a486cb23492005953d9074d6d42ebcb6092f\" returns successfully" Dec 13 14:26:19.655059 env[1315]: time="2024-12-13T14:26:19.655024197Z" level=info msg="StopPodSandbox for \"ad144b238cb24c2ac99eb1bc65066c8e93f2b03e23a183e6f33a63929bef2838\"" Dec 13 14:26:19.655117 env[1315]: time="2024-12-13T14:26:19.655093249Z" level=info msg="Container to stop \"99396370e5c247fd9349e2156ca1a486cb23492005953d9074d6d42ebcb6092f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:26:19.655117 env[1315]: time="2024-12-13T14:26:19.655107315Z" level=info msg="Container to stop \"5ffcec342b5470eb7a002a2582aae2c6e493a9f5c7f442db6aa6bb599fd72142\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:26:19.655195 env[1315]: time="2024-12-13T14:26:19.655116774Z" level=info msg="Container to stop \"287f1726dcde39d6512cc45391ac24ecfcd78a57101ce4e74bafbaf24ad43cee\" must be in running or unknown state, 
current state \"CONTAINER_EXITED\"" Dec 13 14:26:19.655195 env[1315]: time="2024-12-13T14:26:19.655127544Z" level=info msg="Container to stop \"956650c3eef9d5b46c6c51975a63a35e72eaadcd8bfcbd63c48890cb5f89ea4a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:26:19.655195 env[1315]: time="2024-12-13T14:26:19.655136942Z" level=info msg="Container to stop \"13142c396c3dc7a28787ad20786d4e5c0f44f82b645c3b9115cf1f9b96e92d14\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:26:19.657044 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ad144b238cb24c2ac99eb1bc65066c8e93f2b03e23a183e6f33a63929bef2838-shm.mount: Deactivated successfully. Dec 13 14:26:19.674567 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ad144b238cb24c2ac99eb1bc65066c8e93f2b03e23a183e6f33a63929bef2838-rootfs.mount: Deactivated successfully. Dec 13 14:26:19.889782 env[1315]: time="2024-12-13T14:26:19.889711092Z" level=info msg="shim disconnected" id=ad144b238cb24c2ac99eb1bc65066c8e93f2b03e23a183e6f33a63929bef2838 Dec 13 14:26:19.889782 env[1315]: time="2024-12-13T14:26:19.889775855Z" level=warning msg="cleaning up after shim disconnected" id=ad144b238cb24c2ac99eb1bc65066c8e93f2b03e23a183e6f33a63929bef2838 namespace=k8s.io Dec 13 14:26:19.889782 env[1315]: time="2024-12-13T14:26:19.889784341Z" level=info msg="cleaning up dead shim" Dec 13 14:26:19.896325 env[1315]: time="2024-12-13T14:26:19.896257490Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:26:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3087 runtime=io.containerd.runc.v2\n" Dec 13 14:26:19.896682 env[1315]: time="2024-12-13T14:26:19.896651347Z" level=info msg="TearDown network for sandbox \"ad144b238cb24c2ac99eb1bc65066c8e93f2b03e23a183e6f33a63929bef2838\" successfully" Dec 13 14:26:19.896682 env[1315]: time="2024-12-13T14:26:19.896676185Z" level=info msg="StopPodSandbox for 
\"ad144b238cb24c2ac99eb1bc65066c8e93f2b03e23a183e6f33a63929bef2838\" returns successfully" Dec 13 14:26:19.984634 kubelet[1560]: I1213 14:26:19.984179 1560 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6d3804ce-aca6-4530-9169-4cd9a87b7c3e-cilium-config-path\") pod \"6d3804ce-aca6-4530-9169-4cd9a87b7c3e\" (UID: \"6d3804ce-aca6-4530-9169-4cd9a87b7c3e\") " Dec 13 14:26:19.984634 kubelet[1560]: I1213 14:26:19.984232 1560 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6d3804ce-aca6-4530-9169-4cd9a87b7c3e-hostproc\") pod \"6d3804ce-aca6-4530-9169-4cd9a87b7c3e\" (UID: \"6d3804ce-aca6-4530-9169-4cd9a87b7c3e\") " Dec 13 14:26:19.984634 kubelet[1560]: I1213 14:26:19.984251 1560 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6d3804ce-aca6-4530-9169-4cd9a87b7c3e-cilium-cgroup\") pod \"6d3804ce-aca6-4530-9169-4cd9a87b7c3e\" (UID: \"6d3804ce-aca6-4530-9169-4cd9a87b7c3e\") " Dec 13 14:26:19.984634 kubelet[1560]: I1213 14:26:19.984270 1560 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6d3804ce-aca6-4530-9169-4cd9a87b7c3e-lib-modules\") pod \"6d3804ce-aca6-4530-9169-4cd9a87b7c3e\" (UID: \"6d3804ce-aca6-4530-9169-4cd9a87b7c3e\") " Dec 13 14:26:19.984634 kubelet[1560]: I1213 14:26:19.984285 1560 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6d3804ce-aca6-4530-9169-4cd9a87b7c3e-etc-cni-netd\") pod \"6d3804ce-aca6-4530-9169-4cd9a87b7c3e\" (UID: \"6d3804ce-aca6-4530-9169-4cd9a87b7c3e\") " Dec 13 14:26:19.984634 kubelet[1560]: I1213 14:26:19.984301 1560 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" 
(UniqueName: \"kubernetes.io/host-path/6d3804ce-aca6-4530-9169-4cd9a87b7c3e-cilium-run\") pod \"6d3804ce-aca6-4530-9169-4cd9a87b7c3e\" (UID: \"6d3804ce-aca6-4530-9169-4cd9a87b7c3e\") " Dec 13 14:26:19.985073 kubelet[1560]: I1213 14:26:19.984342 1560 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6d3804ce-aca6-4530-9169-4cd9a87b7c3e-host-proc-sys-kernel\") pod \"6d3804ce-aca6-4530-9169-4cd9a87b7c3e\" (UID: \"6d3804ce-aca6-4530-9169-4cd9a87b7c3e\") " Dec 13 14:26:19.985073 kubelet[1560]: I1213 14:26:19.984366 1560 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrlfh\" (UniqueName: \"kubernetes.io/projected/6d3804ce-aca6-4530-9169-4cd9a87b7c3e-kube-api-access-qrlfh\") pod \"6d3804ce-aca6-4530-9169-4cd9a87b7c3e\" (UID: \"6d3804ce-aca6-4530-9169-4cd9a87b7c3e\") " Dec 13 14:26:19.985073 kubelet[1560]: I1213 14:26:19.984392 1560 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6d3804ce-aca6-4530-9169-4cd9a87b7c3e-clustermesh-secrets\") pod \"6d3804ce-aca6-4530-9169-4cd9a87b7c3e\" (UID: \"6d3804ce-aca6-4530-9169-4cd9a87b7c3e\") " Dec 13 14:26:19.985073 kubelet[1560]: I1213 14:26:19.984379 1560 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d3804ce-aca6-4530-9169-4cd9a87b7c3e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "6d3804ce-aca6-4530-9169-4cd9a87b7c3e" (UID: "6d3804ce-aca6-4530-9169-4cd9a87b7c3e"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:26:19.985073 kubelet[1560]: I1213 14:26:19.984411 1560 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6d3804ce-aca6-4530-9169-4cd9a87b7c3e-cni-path\") pod \"6d3804ce-aca6-4530-9169-4cd9a87b7c3e\" (UID: \"6d3804ce-aca6-4530-9169-4cd9a87b7c3e\") " Dec 13 14:26:19.985254 kubelet[1560]: I1213 14:26:19.984458 1560 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d3804ce-aca6-4530-9169-4cd9a87b7c3e-cni-path" (OuterVolumeSpecName: "cni-path") pod "6d3804ce-aca6-4530-9169-4cd9a87b7c3e" (UID: "6d3804ce-aca6-4530-9169-4cd9a87b7c3e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:26:19.985254 kubelet[1560]: I1213 14:26:19.984490 1560 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d3804ce-aca6-4530-9169-4cd9a87b7c3e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6d3804ce-aca6-4530-9169-4cd9a87b7c3e" (UID: "6d3804ce-aca6-4530-9169-4cd9a87b7c3e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:26:19.985254 kubelet[1560]: I1213 14:26:19.984506 1560 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d3804ce-aca6-4530-9169-4cd9a87b7c3e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "6d3804ce-aca6-4530-9169-4cd9a87b7c3e" (UID: "6d3804ce-aca6-4530-9169-4cd9a87b7c3e"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:26:19.985254 kubelet[1560]: I1213 14:26:19.984505 1560 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6d3804ce-aca6-4530-9169-4cd9a87b7c3e-bpf-maps\") pod \"6d3804ce-aca6-4530-9169-4cd9a87b7c3e\" (UID: \"6d3804ce-aca6-4530-9169-4cd9a87b7c3e\") " Dec 13 14:26:19.985254 kubelet[1560]: I1213 14:26:19.984521 1560 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d3804ce-aca6-4530-9169-4cd9a87b7c3e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "6d3804ce-aca6-4530-9169-4cd9a87b7c3e" (UID: "6d3804ce-aca6-4530-9169-4cd9a87b7c3e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:26:19.985497 kubelet[1560]: I1213 14:26:19.984539 1560 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d3804ce-aca6-4530-9169-4cd9a87b7c3e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "6d3804ce-aca6-4530-9169-4cd9a87b7c3e" (UID: "6d3804ce-aca6-4530-9169-4cd9a87b7c3e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:26:19.985497 kubelet[1560]: I1213 14:26:19.984544 1560 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6d3804ce-aca6-4530-9169-4cd9a87b7c3e-host-proc-sys-net\") pod \"6d3804ce-aca6-4530-9169-4cd9a87b7c3e\" (UID: \"6d3804ce-aca6-4530-9169-4cd9a87b7c3e\") " Dec 13 14:26:19.985497 kubelet[1560]: I1213 14:26:19.984567 1560 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d3804ce-aca6-4530-9169-4cd9a87b7c3e-hostproc" (OuterVolumeSpecName: "hostproc") pod "6d3804ce-aca6-4530-9169-4cd9a87b7c3e" (UID: "6d3804ce-aca6-4530-9169-4cd9a87b7c3e"). 
InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:26:19.985497 kubelet[1560]: I1213 14:26:19.984573 1560 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6d3804ce-aca6-4530-9169-4cd9a87b7c3e-xtables-lock\") pod \"6d3804ce-aca6-4530-9169-4cd9a87b7c3e\" (UID: \"6d3804ce-aca6-4530-9169-4cd9a87b7c3e\") " Dec 13 14:26:19.985497 kubelet[1560]: I1213 14:26:19.984612 1560 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6d3804ce-aca6-4530-9169-4cd9a87b7c3e-hubble-tls\") pod \"6d3804ce-aca6-4530-9169-4cd9a87b7c3e\" (UID: \"6d3804ce-aca6-4530-9169-4cd9a87b7c3e\") " Dec 13 14:26:19.985497 kubelet[1560]: I1213 14:26:19.984651 1560 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6d3804ce-aca6-4530-9169-4cd9a87b7c3e-hostproc\") on node \"10.0.0.92\" DevicePath \"\"" Dec 13 14:26:19.985739 kubelet[1560]: I1213 14:26:19.984667 1560 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6d3804ce-aca6-4530-9169-4cd9a87b7c3e-cilium-cgroup\") on node \"10.0.0.92\" DevicePath \"\"" Dec 13 14:26:19.985739 kubelet[1560]: I1213 14:26:19.984679 1560 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6d3804ce-aca6-4530-9169-4cd9a87b7c3e-lib-modules\") on node \"10.0.0.92\" DevicePath \"\"" Dec 13 14:26:19.985739 kubelet[1560]: I1213 14:26:19.984694 1560 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6d3804ce-aca6-4530-9169-4cd9a87b7c3e-etc-cni-netd\") on node \"10.0.0.92\" DevicePath \"\"" Dec 13 14:26:19.985739 kubelet[1560]: I1213 14:26:19.984706 1560 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/6d3804ce-aca6-4530-9169-4cd9a87b7c3e-cilium-run\") on node \"10.0.0.92\" DevicePath \"\"" Dec 13 14:26:19.985739 kubelet[1560]: I1213 14:26:19.984743 1560 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6d3804ce-aca6-4530-9169-4cd9a87b7c3e-host-proc-sys-kernel\") on node \"10.0.0.92\" DevicePath \"\"" Dec 13 14:26:19.985739 kubelet[1560]: I1213 14:26:19.984757 1560 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6d3804ce-aca6-4530-9169-4cd9a87b7c3e-cni-path\") on node \"10.0.0.92\" DevicePath \"\"" Dec 13 14:26:19.985739 kubelet[1560]: I1213 14:26:19.985621 1560 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d3804ce-aca6-4530-9169-4cd9a87b7c3e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "6d3804ce-aca6-4530-9169-4cd9a87b7c3e" (UID: "6d3804ce-aca6-4530-9169-4cd9a87b7c3e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:26:19.986004 kubelet[1560]: I1213 14:26:19.985676 1560 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d3804ce-aca6-4530-9169-4cd9a87b7c3e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "6d3804ce-aca6-4530-9169-4cd9a87b7c3e" (UID: "6d3804ce-aca6-4530-9169-4cd9a87b7c3e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:26:19.986004 kubelet[1560]: I1213 14:26:19.984581 1560 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6d3804ce-aca6-4530-9169-4cd9a87b7c3e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "6d3804ce-aca6-4530-9169-4cd9a87b7c3e" (UID: "6d3804ce-aca6-4530-9169-4cd9a87b7c3e"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:26:19.987383 kubelet[1560]: I1213 14:26:19.987346 1560 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d3804ce-aca6-4530-9169-4cd9a87b7c3e-kube-api-access-qrlfh" (OuterVolumeSpecName: "kube-api-access-qrlfh") pod "6d3804ce-aca6-4530-9169-4cd9a87b7c3e" (UID: "6d3804ce-aca6-4530-9169-4cd9a87b7c3e"). InnerVolumeSpecName "kube-api-access-qrlfh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:26:19.988027 kubelet[1560]: I1213 14:26:19.988005 1560 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d3804ce-aca6-4530-9169-4cd9a87b7c3e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6d3804ce-aca6-4530-9169-4cd9a87b7c3e" (UID: "6d3804ce-aca6-4530-9169-4cd9a87b7c3e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:26:19.989111 kubelet[1560]: I1213 14:26:19.989046 1560 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d3804ce-aca6-4530-9169-4cd9a87b7c3e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "6d3804ce-aca6-4530-9169-4cd9a87b7c3e" (UID: "6d3804ce-aca6-4530-9169-4cd9a87b7c3e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:26:19.989111 kubelet[1560]: I1213 14:26:19.989043 1560 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d3804ce-aca6-4530-9169-4cd9a87b7c3e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "6d3804ce-aca6-4530-9169-4cd9a87b7c3e" (UID: "6d3804ce-aca6-4530-9169-4cd9a87b7c3e"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:26:19.989354 systemd[1]: var-lib-kubelet-pods-6d3804ce\x2daca6\x2d4530\x2d9169\x2d4cd9a87b7c3e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqrlfh.mount: Deactivated successfully. Dec 13 14:26:20.044299 kubelet[1560]: I1213 14:26:20.044264 1560 scope.go:117] "RemoveContainer" containerID="99396370e5c247fd9349e2156ca1a486cb23492005953d9074d6d42ebcb6092f" Dec 13 14:26:20.050213 env[1315]: time="2024-12-13T14:26:20.050173176Z" level=info msg="RemoveContainer for \"99396370e5c247fd9349e2156ca1a486cb23492005953d9074d6d42ebcb6092f\"" Dec 13 14:26:20.085685 kubelet[1560]: I1213 14:26:20.085627 1560 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-qrlfh\" (UniqueName: \"kubernetes.io/projected/6d3804ce-aca6-4530-9169-4cd9a87b7c3e-kube-api-access-qrlfh\") on node \"10.0.0.92\" DevicePath \"\"" Dec 13 14:26:20.085685 kubelet[1560]: I1213 14:26:20.085681 1560 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6d3804ce-aca6-4530-9169-4cd9a87b7c3e-clustermesh-secrets\") on node \"10.0.0.92\" DevicePath \"\"" Dec 13 14:26:20.085685 kubelet[1560]: I1213 14:26:20.085696 1560 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6d3804ce-aca6-4530-9169-4cd9a87b7c3e-bpf-maps\") on node \"10.0.0.92\" DevicePath \"\"" Dec 13 14:26:20.085978 kubelet[1560]: I1213 14:26:20.085725 1560 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6d3804ce-aca6-4530-9169-4cd9a87b7c3e-xtables-lock\") on node \"10.0.0.92\" DevicePath \"\"" Dec 13 14:26:20.085978 kubelet[1560]: I1213 14:26:20.085738 1560 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6d3804ce-aca6-4530-9169-4cd9a87b7c3e-hubble-tls\") on node \"10.0.0.92\" DevicePath \"\"" Dec 13 14:26:20.085978 
kubelet[1560]: I1213 14:26:20.085750 1560 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6d3804ce-aca6-4530-9169-4cd9a87b7c3e-host-proc-sys-net\") on node \"10.0.0.92\" DevicePath \"\"" Dec 13 14:26:20.085978 kubelet[1560]: I1213 14:26:20.085761 1560 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6d3804ce-aca6-4530-9169-4cd9a87b7c3e-cilium-config-path\") on node \"10.0.0.92\" DevicePath \"\"" Dec 13 14:26:20.100103 env[1315]: time="2024-12-13T14:26:20.100052082Z" level=info msg="RemoveContainer for \"99396370e5c247fd9349e2156ca1a486cb23492005953d9074d6d42ebcb6092f\" returns successfully" Dec 13 14:26:20.100431 kubelet[1560]: I1213 14:26:20.100406 1560 scope.go:117] "RemoveContainer" containerID="13142c396c3dc7a28787ad20786d4e5c0f44f82b645c3b9115cf1f9b96e92d14" Dec 13 14:26:20.101544 env[1315]: time="2024-12-13T14:26:20.101507807Z" level=info msg="RemoveContainer for \"13142c396c3dc7a28787ad20786d4e5c0f44f82b645c3b9115cf1f9b96e92d14\"" Dec 13 14:26:20.152177 env[1315]: time="2024-12-13T14:26:20.152108493Z" level=info msg="RemoveContainer for \"13142c396c3dc7a28787ad20786d4e5c0f44f82b645c3b9115cf1f9b96e92d14\" returns successfully" Dec 13 14:26:20.152453 kubelet[1560]: I1213 14:26:20.152400 1560 scope.go:117] "RemoveContainer" containerID="956650c3eef9d5b46c6c51975a63a35e72eaadcd8bfcbd63c48890cb5f89ea4a" Dec 13 14:26:20.153633 env[1315]: time="2024-12-13T14:26:20.153593604Z" level=info msg="RemoveContainer for \"956650c3eef9d5b46c6c51975a63a35e72eaadcd8bfcbd63c48890cb5f89ea4a\"" Dec 13 14:26:20.240556 env[1315]: time="2024-12-13T14:26:20.240424525Z" level=info msg="RemoveContainer for \"956650c3eef9d5b46c6c51975a63a35e72eaadcd8bfcbd63c48890cb5f89ea4a\" returns successfully" Dec 13 14:26:20.240822 kubelet[1560]: I1213 14:26:20.240794 1560 scope.go:117] "RemoveContainer" 
containerID="287f1726dcde39d6512cc45391ac24ecfcd78a57101ce4e74bafbaf24ad43cee" Dec 13 14:26:20.241939 env[1315]: time="2024-12-13T14:26:20.241909715Z" level=info msg="RemoveContainer for \"287f1726dcde39d6512cc45391ac24ecfcd78a57101ce4e74bafbaf24ad43cee\"" Dec 13 14:26:20.342694 env[1315]: time="2024-12-13T14:26:20.342618182Z" level=info msg="RemoveContainer for \"287f1726dcde39d6512cc45391ac24ecfcd78a57101ce4e74bafbaf24ad43cee\" returns successfully" Dec 13 14:26:20.343046 kubelet[1560]: I1213 14:26:20.343016 1560 scope.go:117] "RemoveContainer" containerID="5ffcec342b5470eb7a002a2582aae2c6e493a9f5c7f442db6aa6bb599fd72142" Dec 13 14:26:20.344356 env[1315]: time="2024-12-13T14:26:20.344301018Z" level=info msg="RemoveContainer for \"5ffcec342b5470eb7a002a2582aae2c6e493a9f5c7f442db6aa6bb599fd72142\"" Dec 13 14:26:20.348854 env[1315]: time="2024-12-13T14:26:20.348771388Z" level=info msg="RemoveContainer for \"5ffcec342b5470eb7a002a2582aae2c6e493a9f5c7f442db6aa6bb599fd72142\" returns successfully" Dec 13 14:26:20.349114 kubelet[1560]: I1213 14:26:20.349079 1560 scope.go:117] "RemoveContainer" containerID="99396370e5c247fd9349e2156ca1a486cb23492005953d9074d6d42ebcb6092f" Dec 13 14:26:20.349573 env[1315]: time="2024-12-13T14:26:20.349431331Z" level=error msg="ContainerStatus for \"99396370e5c247fd9349e2156ca1a486cb23492005953d9074d6d42ebcb6092f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"99396370e5c247fd9349e2156ca1a486cb23492005953d9074d6d42ebcb6092f\": not found" Dec 13 14:26:20.349774 kubelet[1560]: E1213 14:26:20.349748 1560 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"99396370e5c247fd9349e2156ca1a486cb23492005953d9074d6d42ebcb6092f\": not found" containerID="99396370e5c247fd9349e2156ca1a486cb23492005953d9074d6d42ebcb6092f" Dec 13 14:26:20.349898 kubelet[1560]: I1213 14:26:20.349877 1560 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"99396370e5c247fd9349e2156ca1a486cb23492005953d9074d6d42ebcb6092f"} err="failed to get container status \"99396370e5c247fd9349e2156ca1a486cb23492005953d9074d6d42ebcb6092f\": rpc error: code = NotFound desc = an error occurred when try to find container \"99396370e5c247fd9349e2156ca1a486cb23492005953d9074d6d42ebcb6092f\": not found" Dec 13 14:26:20.349898 kubelet[1560]: I1213 14:26:20.349913 1560 scope.go:117] "RemoveContainer" containerID="13142c396c3dc7a28787ad20786d4e5c0f44f82b645c3b9115cf1f9b96e92d14" Dec 13 14:26:20.350260 env[1315]: time="2024-12-13T14:26:20.350139546Z" level=error msg="ContainerStatus for \"13142c396c3dc7a28787ad20786d4e5c0f44f82b645c3b9115cf1f9b96e92d14\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"13142c396c3dc7a28787ad20786d4e5c0f44f82b645c3b9115cf1f9b96e92d14\": not found" Dec 13 14:26:20.350331 kubelet[1560]: E1213 14:26:20.350285 1560 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"13142c396c3dc7a28787ad20786d4e5c0f44f82b645c3b9115cf1f9b96e92d14\": not found" containerID="13142c396c3dc7a28787ad20786d4e5c0f44f82b645c3b9115cf1f9b96e92d14" Dec 13 14:26:20.350331 kubelet[1560]: I1213 14:26:20.350310 1560 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"13142c396c3dc7a28787ad20786d4e5c0f44f82b645c3b9115cf1f9b96e92d14"} err="failed to get container status \"13142c396c3dc7a28787ad20786d4e5c0f44f82b645c3b9115cf1f9b96e92d14\": rpc error: code = NotFound desc = an error occurred when try to find container \"13142c396c3dc7a28787ad20786d4e5c0f44f82b645c3b9115cf1f9b96e92d14\": not found" Dec 13 14:26:20.350386 kubelet[1560]: I1213 14:26:20.350335 1560 scope.go:117] "RemoveContainer" 
containerID="956650c3eef9d5b46c6c51975a63a35e72eaadcd8bfcbd63c48890cb5f89ea4a" Dec 13 14:26:20.350566 env[1315]: time="2024-12-13T14:26:20.350498248Z" level=error msg="ContainerStatus for \"956650c3eef9d5b46c6c51975a63a35e72eaadcd8bfcbd63c48890cb5f89ea4a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"956650c3eef9d5b46c6c51975a63a35e72eaadcd8bfcbd63c48890cb5f89ea4a\": not found" Dec 13 14:26:20.350821 kubelet[1560]: E1213 14:26:20.350634 1560 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"956650c3eef9d5b46c6c51975a63a35e72eaadcd8bfcbd63c48890cb5f89ea4a\": not found" containerID="956650c3eef9d5b46c6c51975a63a35e72eaadcd8bfcbd63c48890cb5f89ea4a" Dec 13 14:26:20.350821 kubelet[1560]: I1213 14:26:20.350664 1560 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"956650c3eef9d5b46c6c51975a63a35e72eaadcd8bfcbd63c48890cb5f89ea4a"} err="failed to get container status \"956650c3eef9d5b46c6c51975a63a35e72eaadcd8bfcbd63c48890cb5f89ea4a\": rpc error: code = NotFound desc = an error occurred when try to find container \"956650c3eef9d5b46c6c51975a63a35e72eaadcd8bfcbd63c48890cb5f89ea4a\": not found" Dec 13 14:26:20.350821 kubelet[1560]: I1213 14:26:20.350675 1560 scope.go:117] "RemoveContainer" containerID="287f1726dcde39d6512cc45391ac24ecfcd78a57101ce4e74bafbaf24ad43cee" Dec 13 14:26:20.350928 env[1315]: time="2024-12-13T14:26:20.350827794Z" level=error msg="ContainerStatus for \"287f1726dcde39d6512cc45391ac24ecfcd78a57101ce4e74bafbaf24ad43cee\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"287f1726dcde39d6512cc45391ac24ecfcd78a57101ce4e74bafbaf24ad43cee\": not found" Dec 13 14:26:20.350962 kubelet[1560]: E1213 14:26:20.350949 1560 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = an error occurred when try to find container \"287f1726dcde39d6512cc45391ac24ecfcd78a57101ce4e74bafbaf24ad43cee\": not found" containerID="287f1726dcde39d6512cc45391ac24ecfcd78a57101ce4e74bafbaf24ad43cee" Dec 13 14:26:20.350994 kubelet[1560]: I1213 14:26:20.350971 1560 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"287f1726dcde39d6512cc45391ac24ecfcd78a57101ce4e74bafbaf24ad43cee"} err="failed to get container status \"287f1726dcde39d6512cc45391ac24ecfcd78a57101ce4e74bafbaf24ad43cee\": rpc error: code = NotFound desc = an error occurred when try to find container \"287f1726dcde39d6512cc45391ac24ecfcd78a57101ce4e74bafbaf24ad43cee\": not found" Dec 13 14:26:20.350994 kubelet[1560]: I1213 14:26:20.350980 1560 scope.go:117] "RemoveContainer" containerID="5ffcec342b5470eb7a002a2582aae2c6e493a9f5c7f442db6aa6bb599fd72142" Dec 13 14:26:20.351139 env[1315]: time="2024-12-13T14:26:20.351094390Z" level=error msg="ContainerStatus for \"5ffcec342b5470eb7a002a2582aae2c6e493a9f5c7f442db6aa6bb599fd72142\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5ffcec342b5470eb7a002a2582aae2c6e493a9f5c7f442db6aa6bb599fd72142\": not found" Dec 13 14:26:20.351241 kubelet[1560]: E1213 14:26:20.351219 1560 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5ffcec342b5470eb7a002a2582aae2c6e493a9f5c7f442db6aa6bb599fd72142\": not found" containerID="5ffcec342b5470eb7a002a2582aae2c6e493a9f5c7f442db6aa6bb599fd72142" Dec 13 14:26:20.351241 kubelet[1560]: I1213 14:26:20.351241 1560 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5ffcec342b5470eb7a002a2582aae2c6e493a9f5c7f442db6aa6bb599fd72142"} err="failed to get container status \"5ffcec342b5470eb7a002a2582aae2c6e493a9f5c7f442db6aa6bb599fd72142\": rpc error: code = NotFound desc = 
an error occurred when try to find container \"5ffcec342b5470eb7a002a2582aae2c6e493a9f5c7f442db6aa6bb599fd72142\": not found" Dec 13 14:26:20.364194 systemd[1]: var-lib-kubelet-pods-6d3804ce\x2daca6\x2d4530\x2d9169\x2d4cd9a87b7c3e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 14:26:20.364371 systemd[1]: var-lib-kubelet-pods-6d3804ce\x2daca6\x2d4530\x2d9169\x2d4cd9a87b7c3e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 14:26:20.646573 kubelet[1560]: E1213 14:26:20.646500 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:20.921657 kubelet[1560]: I1213 14:26:20.921503 1560 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="6d3804ce-aca6-4530-9169-4cd9a87b7c3e" path="/var/lib/kubelet/pods/6d3804ce-aca6-4530-9169-4cd9a87b7c3e/volumes" Dec 13 14:26:21.647354 kubelet[1560]: E1213 14:26:21.647283 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:21.953909 kubelet[1560]: I1213 14:26:21.953695 1560 topology_manager.go:215] "Topology Admit Handler" podUID="160f19ff-7cd2-496e-9947-68f0b4c00bda" podNamespace="kube-system" podName="cilium-operator-5cc964979-8n76k" Dec 13 14:26:21.953909 kubelet[1560]: E1213 14:26:21.953806 1560 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6d3804ce-aca6-4530-9169-4cd9a87b7c3e" containerName="mount-cgroup" Dec 13 14:26:21.953909 kubelet[1560]: E1213 14:26:21.953823 1560 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6d3804ce-aca6-4530-9169-4cd9a87b7c3e" containerName="mount-bpf-fs" Dec 13 14:26:21.953909 kubelet[1560]: E1213 14:26:21.953832 1560 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6d3804ce-aca6-4530-9169-4cd9a87b7c3e" containerName="cilium-agent" Dec 13 14:26:21.953909 kubelet[1560]: E1213 
14:26:21.953842 1560 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6d3804ce-aca6-4530-9169-4cd9a87b7c3e" containerName="apply-sysctl-overwrites" Dec 13 14:26:21.953909 kubelet[1560]: E1213 14:26:21.953851 1560 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6d3804ce-aca6-4530-9169-4cd9a87b7c3e" containerName="clean-cilium-state" Dec 13 14:26:21.953909 kubelet[1560]: I1213 14:26:21.953884 1560 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d3804ce-aca6-4530-9169-4cd9a87b7c3e" containerName="cilium-agent" Dec 13 14:26:21.964089 kubelet[1560]: I1213 14:26:21.964024 1560 topology_manager.go:215] "Topology Admit Handler" podUID="05bc4a58-c82c-497c-af8e-a3f1b1db42a1" podNamespace="kube-system" podName="cilium-lvl6g" Dec 13 14:26:21.997002 kubelet[1560]: I1213 14:26:21.996936 1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/05bc4a58-c82c-497c-af8e-a3f1b1db42a1-cilium-cgroup\") pod \"cilium-lvl6g\" (UID: \"05bc4a58-c82c-497c-af8e-a3f1b1db42a1\") " pod="kube-system/cilium-lvl6g" Dec 13 14:26:21.997002 kubelet[1560]: I1213 14:26:21.996990 1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/05bc4a58-c82c-497c-af8e-a3f1b1db42a1-hostproc\") pod \"cilium-lvl6g\" (UID: \"05bc4a58-c82c-497c-af8e-a3f1b1db42a1\") " pod="kube-system/cilium-lvl6g" Dec 13 14:26:21.997213 kubelet[1560]: I1213 14:26:21.997018 1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpc7d\" (UniqueName: \"kubernetes.io/projected/160f19ff-7cd2-496e-9947-68f0b4c00bda-kube-api-access-tpc7d\") pod \"cilium-operator-5cc964979-8n76k\" (UID: \"160f19ff-7cd2-496e-9947-68f0b4c00bda\") " pod="kube-system/cilium-operator-5cc964979-8n76k" Dec 13 14:26:21.997213 kubelet[1560]: I1213 
14:26:21.997094 1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/05bc4a58-c82c-497c-af8e-a3f1b1db42a1-hubble-tls\") pod \"cilium-lvl6g\" (UID: \"05bc4a58-c82c-497c-af8e-a3f1b1db42a1\") " pod="kube-system/cilium-lvl6g" Dec 13 14:26:21.997336 kubelet[1560]: I1213 14:26:21.997215 1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/05bc4a58-c82c-497c-af8e-a3f1b1db42a1-cilium-ipsec-secrets\") pod \"cilium-lvl6g\" (UID: \"05bc4a58-c82c-497c-af8e-a3f1b1db42a1\") " pod="kube-system/cilium-lvl6g" Dec 13 14:26:21.997336 kubelet[1560]: I1213 14:26:21.997239 1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/05bc4a58-c82c-497c-af8e-a3f1b1db42a1-host-proc-sys-kernel\") pod \"cilium-lvl6g\" (UID: \"05bc4a58-c82c-497c-af8e-a3f1b1db42a1\") " pod="kube-system/cilium-lvl6g" Dec 13 14:26:21.997336 kubelet[1560]: I1213 14:26:21.997265 1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77fc5\" (UniqueName: \"kubernetes.io/projected/05bc4a58-c82c-497c-af8e-a3f1b1db42a1-kube-api-access-77fc5\") pod \"cilium-lvl6g\" (UID: \"05bc4a58-c82c-497c-af8e-a3f1b1db42a1\") " pod="kube-system/cilium-lvl6g" Dec 13 14:26:21.997455 kubelet[1560]: I1213 14:26:21.997346 1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/05bc4a58-c82c-497c-af8e-a3f1b1db42a1-clustermesh-secrets\") pod \"cilium-lvl6g\" (UID: \"05bc4a58-c82c-497c-af8e-a3f1b1db42a1\") " pod="kube-system/cilium-lvl6g" Dec 13 14:26:21.997455 kubelet[1560]: I1213 14:26:21.997395 1560 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/05bc4a58-c82c-497c-af8e-a3f1b1db42a1-bpf-maps\") pod \"cilium-lvl6g\" (UID: \"05bc4a58-c82c-497c-af8e-a3f1b1db42a1\") " pod="kube-system/cilium-lvl6g" Dec 13 14:26:21.997455 kubelet[1560]: I1213 14:26:21.997426 1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/05bc4a58-c82c-497c-af8e-a3f1b1db42a1-etc-cni-netd\") pod \"cilium-lvl6g\" (UID: \"05bc4a58-c82c-497c-af8e-a3f1b1db42a1\") " pod="kube-system/cilium-lvl6g" Dec 13 14:26:21.997561 kubelet[1560]: I1213 14:26:21.997458 1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/05bc4a58-c82c-497c-af8e-a3f1b1db42a1-host-proc-sys-net\") pod \"cilium-lvl6g\" (UID: \"05bc4a58-c82c-497c-af8e-a3f1b1db42a1\") " pod="kube-system/cilium-lvl6g" Dec 13 14:26:21.997561 kubelet[1560]: I1213 14:26:21.997507 1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/160f19ff-7cd2-496e-9947-68f0b4c00bda-cilium-config-path\") pod \"cilium-operator-5cc964979-8n76k\" (UID: \"160f19ff-7cd2-496e-9947-68f0b4c00bda\") " pod="kube-system/cilium-operator-5cc964979-8n76k" Dec 13 14:26:21.997561 kubelet[1560]: I1213 14:26:21.997558 1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/05bc4a58-c82c-497c-af8e-a3f1b1db42a1-cni-path\") pod \"cilium-lvl6g\" (UID: \"05bc4a58-c82c-497c-af8e-a3f1b1db42a1\") " pod="kube-system/cilium-lvl6g" Dec 13 14:26:21.997662 kubelet[1560]: I1213 14:26:21.997613 1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" 
(UniqueName: \"kubernetes.io/configmap/05bc4a58-c82c-497c-af8e-a3f1b1db42a1-cilium-config-path\") pod \"cilium-lvl6g\" (UID: \"05bc4a58-c82c-497c-af8e-a3f1b1db42a1\") " pod="kube-system/cilium-lvl6g" Dec 13 14:26:21.997705 kubelet[1560]: I1213 14:26:21.997666 1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/05bc4a58-c82c-497c-af8e-a3f1b1db42a1-xtables-lock\") pod \"cilium-lvl6g\" (UID: \"05bc4a58-c82c-497c-af8e-a3f1b1db42a1\") " pod="kube-system/cilium-lvl6g" Dec 13 14:26:21.997771 kubelet[1560]: I1213 14:26:21.997727 1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/05bc4a58-c82c-497c-af8e-a3f1b1db42a1-cilium-run\") pod \"cilium-lvl6g\" (UID: \"05bc4a58-c82c-497c-af8e-a3f1b1db42a1\") " pod="kube-system/cilium-lvl6g" Dec 13 14:26:21.997771 kubelet[1560]: I1213 14:26:21.997768 1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/05bc4a58-c82c-497c-af8e-a3f1b1db42a1-lib-modules\") pod \"cilium-lvl6g\" (UID: \"05bc4a58-c82c-497c-af8e-a3f1b1db42a1\") " pod="kube-system/cilium-lvl6g" Dec 13 14:26:22.129191 kubelet[1560]: E1213 14:26:22.129143 1560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:26:22.129783 env[1315]: time="2024-12-13T14:26:22.129737002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lvl6g,Uid:05bc4a58-c82c-497c-af8e-a3f1b1db42a1,Namespace:kube-system,Attempt:0,}" Dec 13 14:26:22.144104 env[1315]: time="2024-12-13T14:26:22.144029124Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:26:22.144104 env[1315]: time="2024-12-13T14:26:22.144077035Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:26:22.144104 env[1315]: time="2024-12-13T14:26:22.144091893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:26:22.144343 env[1315]: time="2024-12-13T14:26:22.144289599Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f884afc4685fdf27f90ab649cdb2924ef749eb3c2f523e740a6fd93fde5db521 pid=3115 runtime=io.containerd.runc.v2 Dec 13 14:26:22.180101 env[1315]: time="2024-12-13T14:26:22.178667786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lvl6g,Uid:05bc4a58-c82c-497c-af8e-a3f1b1db42a1,Namespace:kube-system,Attempt:0,} returns sandbox id \"f884afc4685fdf27f90ab649cdb2924ef749eb3c2f523e740a6fd93fde5db521\"" Dec 13 14:26:22.180383 kubelet[1560]: E1213 14:26:22.179488 1560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:26:22.181616 env[1315]: time="2024-12-13T14:26:22.181558492Z" level=info msg="CreateContainer within sandbox \"f884afc4685fdf27f90ab649cdb2924ef749eb3c2f523e740a6fd93fde5db521\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:26:22.193664 env[1315]: time="2024-12-13T14:26:22.193585476Z" level=info msg="CreateContainer within sandbox \"f884afc4685fdf27f90ab649cdb2924ef749eb3c2f523e740a6fd93fde5db521\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c4eafd2e0d57333ddba867b168e89ea44fdd33f6ce51546593ffae8d78ec9199\"" Dec 13 14:26:22.194187 env[1315]: time="2024-12-13T14:26:22.194149095Z" level=info msg="StartContainer for 
\"c4eafd2e0d57333ddba867b168e89ea44fdd33f6ce51546593ffae8d78ec9199\"" Dec 13 14:26:22.234752 env[1315]: time="2024-12-13T14:26:22.234604766Z" level=info msg="StartContainer for \"c4eafd2e0d57333ddba867b168e89ea44fdd33f6ce51546593ffae8d78ec9199\" returns successfully" Dec 13 14:26:22.256637 kubelet[1560]: E1213 14:26:22.256604 1560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:26:22.257218 env[1315]: time="2024-12-13T14:26:22.257141433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-8n76k,Uid:160f19ff-7cd2-496e-9947-68f0b4c00bda,Namespace:kube-system,Attempt:0,}" Dec 13 14:26:22.268201 env[1315]: time="2024-12-13T14:26:22.268145586Z" level=info msg="shim disconnected" id=c4eafd2e0d57333ddba867b168e89ea44fdd33f6ce51546593ffae8d78ec9199 Dec 13 14:26:22.268201 env[1315]: time="2024-12-13T14:26:22.268197734Z" level=warning msg="cleaning up after shim disconnected" id=c4eafd2e0d57333ddba867b168e89ea44fdd33f6ce51546593ffae8d78ec9199 namespace=k8s.io Dec 13 14:26:22.268201 env[1315]: time="2024-12-13T14:26:22.268206691Z" level=info msg="cleaning up dead shim" Dec 13 14:26:22.275431 env[1315]: time="2024-12-13T14:26:22.275389602Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:26:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3198 runtime=io.containerd.runc.v2\n" Dec 13 14:26:22.279115 env[1315]: time="2024-12-13T14:26:22.279040580Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:26:22.279115 env[1315]: time="2024-12-13T14:26:22.279086387Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:26:22.279289 env[1315]: time="2024-12-13T14:26:22.279100014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:26:22.279585 env[1315]: time="2024-12-13T14:26:22.279537294Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0dc7e7c8ab15a1cf529760f2679754f95532c793e46ccc3775957e3f432767de pid=3217 runtime=io.containerd.runc.v2 Dec 13 14:26:22.323446 env[1315]: time="2024-12-13T14:26:22.322700041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-8n76k,Uid:160f19ff-7cd2-496e-9947-68f0b4c00bda,Namespace:kube-system,Attempt:0,} returns sandbox id \"0dc7e7c8ab15a1cf529760f2679754f95532c793e46ccc3775957e3f432767de\"" Dec 13 14:26:22.323599 kubelet[1560]: E1213 14:26:22.323281 1560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:26:22.324188 env[1315]: time="2024-12-13T14:26:22.324163829Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 14:26:22.600500 kubelet[1560]: E1213 14:26:22.600348 1560 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:22.624643 env[1315]: time="2024-12-13T14:26:22.624599847Z" level=info msg="StopPodSandbox for \"ad144b238cb24c2ac99eb1bc65066c8e93f2b03e23a183e6f33a63929bef2838\"" Dec 13 14:26:22.624802 env[1315]: time="2024-12-13T14:26:22.624690939Z" level=info msg="TearDown network for sandbox \"ad144b238cb24c2ac99eb1bc65066c8e93f2b03e23a183e6f33a63929bef2838\" successfully" Dec 13 14:26:22.624802 env[1315]: time="2024-12-13T14:26:22.624741165Z" level=info msg="StopPodSandbox for 
\"ad144b238cb24c2ac99eb1bc65066c8e93f2b03e23a183e6f33a63929bef2838\" returns successfully" Dec 13 14:26:22.625155 env[1315]: time="2024-12-13T14:26:22.625127749Z" level=info msg="RemovePodSandbox for \"ad144b238cb24c2ac99eb1bc65066c8e93f2b03e23a183e6f33a63929bef2838\"" Dec 13 14:26:22.625221 env[1315]: time="2024-12-13T14:26:22.625158446Z" level=info msg="Forcibly stopping sandbox \"ad144b238cb24c2ac99eb1bc65066c8e93f2b03e23a183e6f33a63929bef2838\"" Dec 13 14:26:22.625256 env[1315]: time="2024-12-13T14:26:22.625226075Z" level=info msg="TearDown network for sandbox \"ad144b238cb24c2ac99eb1bc65066c8e93f2b03e23a183e6f33a63929bef2838\" successfully" Dec 13 14:26:22.631289 env[1315]: time="2024-12-13T14:26:22.631234907Z" level=info msg="RemovePodSandbox \"ad144b238cb24c2ac99eb1bc65066c8e93f2b03e23a183e6f33a63929bef2838\" returns successfully" Dec 13 14:26:22.647829 kubelet[1560]: E1213 14:26:22.647791 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:22.897024 kubelet[1560]: E1213 14:26:22.896987 1560 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 14:26:23.051418 env[1315]: time="2024-12-13T14:26:23.051370648Z" level=info msg="StopPodSandbox for \"f884afc4685fdf27f90ab649cdb2924ef749eb3c2f523e740a6fd93fde5db521\"" Dec 13 14:26:23.051626 env[1315]: time="2024-12-13T14:26:23.051428428Z" level=info msg="Container to stop \"c4eafd2e0d57333ddba867b168e89ea44fdd33f6ce51546593ffae8d78ec9199\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:26:23.083104 env[1315]: time="2024-12-13T14:26:23.083057699Z" level=info msg="shim disconnected" id=f884afc4685fdf27f90ab649cdb2924ef749eb3c2f523e740a6fd93fde5db521 Dec 13 14:26:23.083104 env[1315]: time="2024-12-13T14:26:23.083102113Z" level=warning msg="cleaning up after shim 
disconnected" id=f884afc4685fdf27f90ab649cdb2924ef749eb3c2f523e740a6fd93fde5db521 namespace=k8s.io Dec 13 14:26:23.083104 env[1315]: time="2024-12-13T14:26:23.083110720Z" level=info msg="cleaning up dead shim" Dec 13 14:26:23.089269 env[1315]: time="2024-12-13T14:26:23.089234347Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:26:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3272 runtime=io.containerd.runc.v2\n" Dec 13 14:26:23.089560 env[1315]: time="2024-12-13T14:26:23.089528174Z" level=info msg="TearDown network for sandbox \"f884afc4685fdf27f90ab649cdb2924ef749eb3c2f523e740a6fd93fde5db521\" successfully" Dec 13 14:26:23.089560 env[1315]: time="2024-12-13T14:26:23.089550646Z" level=info msg="StopPodSandbox for \"f884afc4685fdf27f90ab649cdb2924ef749eb3c2f523e740a6fd93fde5db521\" returns successfully" Dec 13 14:26:23.104936 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f884afc4685fdf27f90ab649cdb2924ef749eb3c2f523e740a6fd93fde5db521-shm.mount: Deactivated successfully. Dec 13 14:26:23.105684 kubelet[1560]: I1213 14:26:23.105659 1560 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05bc4a58-c82c-497c-af8e-a3f1b1db42a1-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "05bc4a58-c82c-497c-af8e-a3f1b1db42a1" (UID: "05bc4a58-c82c-497c-af8e-a3f1b1db42a1"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:26:23.105856 kubelet[1560]: I1213 14:26:23.105838 1560 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/05bc4a58-c82c-497c-af8e-a3f1b1db42a1-host-proc-sys-kernel\") pod \"05bc4a58-c82c-497c-af8e-a3f1b1db42a1\" (UID: \"05bc4a58-c82c-497c-af8e-a3f1b1db42a1\") " Dec 13 14:26:23.105976 kubelet[1560]: I1213 14:26:23.105960 1560 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77fc5\" (UniqueName: \"kubernetes.io/projected/05bc4a58-c82c-497c-af8e-a3f1b1db42a1-kube-api-access-77fc5\") pod \"05bc4a58-c82c-497c-af8e-a3f1b1db42a1\" (UID: \"05bc4a58-c82c-497c-af8e-a3f1b1db42a1\") " Dec 13 14:26:23.106059 kubelet[1560]: I1213 14:26:23.105990 1560 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/05bc4a58-c82c-497c-af8e-a3f1b1db42a1-hostproc\") pod \"05bc4a58-c82c-497c-af8e-a3f1b1db42a1\" (UID: \"05bc4a58-c82c-497c-af8e-a3f1b1db42a1\") " Dec 13 14:26:23.106059 kubelet[1560]: I1213 14:26:23.106010 1560 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/05bc4a58-c82c-497c-af8e-a3f1b1db42a1-clustermesh-secrets\") pod \"05bc4a58-c82c-497c-af8e-a3f1b1db42a1\" (UID: \"05bc4a58-c82c-497c-af8e-a3f1b1db42a1\") " Dec 13 14:26:23.106059 kubelet[1560]: I1213 14:26:23.106025 1560 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/05bc4a58-c82c-497c-af8e-a3f1b1db42a1-bpf-maps\") pod \"05bc4a58-c82c-497c-af8e-a3f1b1db42a1\" (UID: \"05bc4a58-c82c-497c-af8e-a3f1b1db42a1\") " Dec 13 14:26:23.106059 kubelet[1560]: I1213 14:26:23.106043 1560 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/05bc4a58-c82c-497c-af8e-a3f1b1db42a1-hubble-tls\") pod \"05bc4a58-c82c-497c-af8e-a3f1b1db42a1\" (UID: \"05bc4a58-c82c-497c-af8e-a3f1b1db42a1\") " Dec 13 14:26:23.106059 kubelet[1560]: I1213 14:26:23.106059 1560 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/05bc4a58-c82c-497c-af8e-a3f1b1db42a1-cilium-run\") pod \"05bc4a58-c82c-497c-af8e-a3f1b1db42a1\" (UID: \"05bc4a58-c82c-497c-af8e-a3f1b1db42a1\") " Dec 13 14:26:23.106230 kubelet[1560]: I1213 14:26:23.106073 1560 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/05bc4a58-c82c-497c-af8e-a3f1b1db42a1-cilium-cgroup\") pod \"05bc4a58-c82c-497c-af8e-a3f1b1db42a1\" (UID: \"05bc4a58-c82c-497c-af8e-a3f1b1db42a1\") " Dec 13 14:26:23.106230 kubelet[1560]: I1213 14:26:23.106093 1560 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/05bc4a58-c82c-497c-af8e-a3f1b1db42a1-cni-path\") pod \"05bc4a58-c82c-497c-af8e-a3f1b1db42a1\" (UID: \"05bc4a58-c82c-497c-af8e-a3f1b1db42a1\") " Dec 13 14:26:23.106230 kubelet[1560]: I1213 14:26:23.106120 1560 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/05bc4a58-c82c-497c-af8e-a3f1b1db42a1-cilium-config-path\") pod \"05bc4a58-c82c-497c-af8e-a3f1b1db42a1\" (UID: \"05bc4a58-c82c-497c-af8e-a3f1b1db42a1\") " Dec 13 14:26:23.106230 kubelet[1560]: I1213 14:26:23.106145 1560 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/05bc4a58-c82c-497c-af8e-a3f1b1db42a1-lib-modules\") pod \"05bc4a58-c82c-497c-af8e-a3f1b1db42a1\" (UID: \"05bc4a58-c82c-497c-af8e-a3f1b1db42a1\") " Dec 13 14:26:23.106230 kubelet[1560]: I1213 14:26:23.106170 1560 reconciler_common.go:172] 
"operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/05bc4a58-c82c-497c-af8e-a3f1b1db42a1-cilium-ipsec-secrets\") pod \"05bc4a58-c82c-497c-af8e-a3f1b1db42a1\" (UID: \"05bc4a58-c82c-497c-af8e-a3f1b1db42a1\") " Dec 13 14:26:23.106230 kubelet[1560]: I1213 14:26:23.106195 1560 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/05bc4a58-c82c-497c-af8e-a3f1b1db42a1-host-proc-sys-net\") pod \"05bc4a58-c82c-497c-af8e-a3f1b1db42a1\" (UID: \"05bc4a58-c82c-497c-af8e-a3f1b1db42a1\") " Dec 13 14:26:23.106484 kubelet[1560]: I1213 14:26:23.106217 1560 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/05bc4a58-c82c-497c-af8e-a3f1b1db42a1-etc-cni-netd\") pod \"05bc4a58-c82c-497c-af8e-a3f1b1db42a1\" (UID: \"05bc4a58-c82c-497c-af8e-a3f1b1db42a1\") " Dec 13 14:26:23.106484 kubelet[1560]: I1213 14:26:23.106239 1560 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/05bc4a58-c82c-497c-af8e-a3f1b1db42a1-xtables-lock\") pod \"05bc4a58-c82c-497c-af8e-a3f1b1db42a1\" (UID: \"05bc4a58-c82c-497c-af8e-a3f1b1db42a1\") " Dec 13 14:26:23.106484 kubelet[1560]: I1213 14:26:23.106278 1560 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/05bc4a58-c82c-497c-af8e-a3f1b1db42a1-host-proc-sys-kernel\") on node \"10.0.0.92\" DevicePath \"\"" Dec 13 14:26:23.106484 kubelet[1560]: I1213 14:26:23.106304 1560 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05bc4a58-c82c-497c-af8e-a3f1b1db42a1-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "05bc4a58-c82c-497c-af8e-a3f1b1db42a1" (UID: "05bc4a58-c82c-497c-af8e-a3f1b1db42a1"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:26:23.107261 kubelet[1560]: I1213 14:26:23.106626 1560 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05bc4a58-c82c-497c-af8e-a3f1b1db42a1-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "05bc4a58-c82c-497c-af8e-a3f1b1db42a1" (UID: "05bc4a58-c82c-497c-af8e-a3f1b1db42a1"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:26:23.107261 kubelet[1560]: I1213 14:26:23.106647 1560 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05bc4a58-c82c-497c-af8e-a3f1b1db42a1-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "05bc4a58-c82c-497c-af8e-a3f1b1db42a1" (UID: "05bc4a58-c82c-497c-af8e-a3f1b1db42a1"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:26:23.107261 kubelet[1560]: I1213 14:26:23.106662 1560 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05bc4a58-c82c-497c-af8e-a3f1b1db42a1-hostproc" (OuterVolumeSpecName: "hostproc") pod "05bc4a58-c82c-497c-af8e-a3f1b1db42a1" (UID: "05bc4a58-c82c-497c-af8e-a3f1b1db42a1"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:26:23.107261 kubelet[1560]: I1213 14:26:23.106681 1560 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05bc4a58-c82c-497c-af8e-a3f1b1db42a1-cni-path" (OuterVolumeSpecName: "cni-path") pod "05bc4a58-c82c-497c-af8e-a3f1b1db42a1" (UID: "05bc4a58-c82c-497c-af8e-a3f1b1db42a1"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:26:23.107261 kubelet[1560]: I1213 14:26:23.106759 1560 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05bc4a58-c82c-497c-af8e-a3f1b1db42a1-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "05bc4a58-c82c-497c-af8e-a3f1b1db42a1" (UID: "05bc4a58-c82c-497c-af8e-a3f1b1db42a1"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:26:23.107470 kubelet[1560]: I1213 14:26:23.106938 1560 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05bc4a58-c82c-497c-af8e-a3f1b1db42a1-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "05bc4a58-c82c-497c-af8e-a3f1b1db42a1" (UID: "05bc4a58-c82c-497c-af8e-a3f1b1db42a1"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:26:23.107470 kubelet[1560]: I1213 14:26:23.107390 1560 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05bc4a58-c82c-497c-af8e-a3f1b1db42a1-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "05bc4a58-c82c-497c-af8e-a3f1b1db42a1" (UID: "05bc4a58-c82c-497c-af8e-a3f1b1db42a1"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:26:23.107470 kubelet[1560]: I1213 14:26:23.107419 1560 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05bc4a58-c82c-497c-af8e-a3f1b1db42a1-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "05bc4a58-c82c-497c-af8e-a3f1b1db42a1" (UID: "05bc4a58-c82c-497c-af8e-a3f1b1db42a1"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:26:23.109666 kubelet[1560]: I1213 14:26:23.109628 1560 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05bc4a58-c82c-497c-af8e-a3f1b1db42a1-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "05bc4a58-c82c-497c-af8e-a3f1b1db42a1" (UID: "05bc4a58-c82c-497c-af8e-a3f1b1db42a1"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:26:23.109838 kubelet[1560]: I1213 14:26:23.109819 1560 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05bc4a58-c82c-497c-af8e-a3f1b1db42a1-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "05bc4a58-c82c-497c-af8e-a3f1b1db42a1" (UID: "05bc4a58-c82c-497c-af8e-a3f1b1db42a1"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:26:23.111087 systemd[1]: var-lib-kubelet-pods-05bc4a58\x2dc82c\x2d497c\x2daf8e\x2da3f1b1db42a1-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 14:26:23.111804 kubelet[1560]: I1213 14:26:23.111190 1560 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05bc4a58-c82c-497c-af8e-a3f1b1db42a1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "05bc4a58-c82c-497c-af8e-a3f1b1db42a1" (UID: "05bc4a58-c82c-497c-af8e-a3f1b1db42a1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:26:23.111227 systemd[1]: var-lib-kubelet-pods-05bc4a58\x2dc82c\x2d497c\x2daf8e\x2da3f1b1db42a1-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Dec 13 14:26:23.112216 kubelet[1560]: I1213 14:26:23.112196 1560 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05bc4a58-c82c-497c-af8e-a3f1b1db42a1-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "05bc4a58-c82c-497c-af8e-a3f1b1db42a1" (UID: "05bc4a58-c82c-497c-af8e-a3f1b1db42a1"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:26:23.113060 kubelet[1560]: I1213 14:26:23.113025 1560 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05bc4a58-c82c-497c-af8e-a3f1b1db42a1-kube-api-access-77fc5" (OuterVolumeSpecName: "kube-api-access-77fc5") pod "05bc4a58-c82c-497c-af8e-a3f1b1db42a1" (UID: "05bc4a58-c82c-497c-af8e-a3f1b1db42a1"). InnerVolumeSpecName "kube-api-access-77fc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:26:23.113532 systemd[1]: var-lib-kubelet-pods-05bc4a58\x2dc82c\x2d497c\x2daf8e\x2da3f1b1db42a1-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 14:26:23.115422 systemd[1]: var-lib-kubelet-pods-05bc4a58\x2dc82c\x2d497c\x2daf8e\x2da3f1b1db42a1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d77fc5.mount: Deactivated successfully. 
Dec 13 14:26:23.206609 kubelet[1560]: I1213 14:26:23.206451 1560 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-77fc5\" (UniqueName: \"kubernetes.io/projected/05bc4a58-c82c-497c-af8e-a3f1b1db42a1-kube-api-access-77fc5\") on node \"10.0.0.92\" DevicePath \"\"" Dec 13 14:26:23.206609 kubelet[1560]: I1213 14:26:23.206504 1560 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/05bc4a58-c82c-497c-af8e-a3f1b1db42a1-hostproc\") on node \"10.0.0.92\" DevicePath \"\"" Dec 13 14:26:23.206609 kubelet[1560]: I1213 14:26:23.206514 1560 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/05bc4a58-c82c-497c-af8e-a3f1b1db42a1-clustermesh-secrets\") on node \"10.0.0.92\" DevicePath \"\"" Dec 13 14:26:23.206609 kubelet[1560]: I1213 14:26:23.206522 1560 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/05bc4a58-c82c-497c-af8e-a3f1b1db42a1-bpf-maps\") on node \"10.0.0.92\" DevicePath \"\"" Dec 13 14:26:23.206609 kubelet[1560]: I1213 14:26:23.206532 1560 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/05bc4a58-c82c-497c-af8e-a3f1b1db42a1-cilium-run\") on node \"10.0.0.92\" DevicePath \"\"" Dec 13 14:26:23.206609 kubelet[1560]: I1213 14:26:23.206540 1560 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/05bc4a58-c82c-497c-af8e-a3f1b1db42a1-hubble-tls\") on node \"10.0.0.92\" DevicePath \"\"" Dec 13 14:26:23.206609 kubelet[1560]: I1213 14:26:23.206547 1560 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/05bc4a58-c82c-497c-af8e-a3f1b1db42a1-cni-path\") on node \"10.0.0.92\" DevicePath \"\"" Dec 13 14:26:23.206609 kubelet[1560]: I1213 14:26:23.206555 1560 reconciler_common.go:300] "Volume detached for volume 
\"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/05bc4a58-c82c-497c-af8e-a3f1b1db42a1-cilium-config-path\") on node \"10.0.0.92\" DevicePath \"\"" Dec 13 14:26:23.207037 kubelet[1560]: I1213 14:26:23.206563 1560 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/05bc4a58-c82c-497c-af8e-a3f1b1db42a1-cilium-cgroup\") on node \"10.0.0.92\" DevicePath \"\"" Dec 13 14:26:23.207037 kubelet[1560]: I1213 14:26:23.206573 1560 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/05bc4a58-c82c-497c-af8e-a3f1b1db42a1-lib-modules\") on node \"10.0.0.92\" DevicePath \"\"" Dec 13 14:26:23.207037 kubelet[1560]: I1213 14:26:23.206582 1560 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/05bc4a58-c82c-497c-af8e-a3f1b1db42a1-etc-cni-netd\") on node \"10.0.0.92\" DevicePath \"\"" Dec 13 14:26:23.207037 kubelet[1560]: I1213 14:26:23.206589 1560 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/05bc4a58-c82c-497c-af8e-a3f1b1db42a1-xtables-lock\") on node \"10.0.0.92\" DevicePath \"\"" Dec 13 14:26:23.207037 kubelet[1560]: I1213 14:26:23.206597 1560 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/05bc4a58-c82c-497c-af8e-a3f1b1db42a1-cilium-ipsec-secrets\") on node \"10.0.0.92\" DevicePath \"\"" Dec 13 14:26:23.207037 kubelet[1560]: I1213 14:26:23.206606 1560 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/05bc4a58-c82c-497c-af8e-a3f1b1db42a1-host-proc-sys-net\") on node \"10.0.0.92\" DevicePath \"\"" Dec 13 14:26:23.648944 kubelet[1560]: E1213 14:26:23.648867 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:23.924641 
kubelet[1560]: I1213 14:26:23.924506 1560 setters.go:568] "Node became not ready" node="10.0.0.92" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T14:26:23Z","lastTransitionTime":"2024-12-13T14:26:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 13 14:26:24.056410 kubelet[1560]: I1213 14:26:24.056371 1560 scope.go:117] "RemoveContainer" containerID="c4eafd2e0d57333ddba867b168e89ea44fdd33f6ce51546593ffae8d78ec9199" Dec 13 14:26:24.057592 env[1315]: time="2024-12-13T14:26:24.057547433Z" level=info msg="RemoveContainer for \"c4eafd2e0d57333ddba867b168e89ea44fdd33f6ce51546593ffae8d78ec9199\"" Dec 13 14:26:24.061246 env[1315]: time="2024-12-13T14:26:24.061188639Z" level=info msg="RemoveContainer for \"c4eafd2e0d57333ddba867b168e89ea44fdd33f6ce51546593ffae8d78ec9199\" returns successfully" Dec 13 14:26:24.087259 kubelet[1560]: I1213 14:26:24.087212 1560 topology_manager.go:215] "Topology Admit Handler" podUID="77058686-ff80-466e-8ccb-e12ff894bc33" podNamespace="kube-system" podName="cilium-fwb78" Dec 13 14:26:24.087259 kubelet[1560]: E1213 14:26:24.087269 1560 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="05bc4a58-c82c-497c-af8e-a3f1b1db42a1" containerName="mount-cgroup" Dec 13 14:26:24.087506 kubelet[1560]: I1213 14:26:24.087290 1560 memory_manager.go:354] "RemoveStaleState removing state" podUID="05bc4a58-c82c-497c-af8e-a3f1b1db42a1" containerName="mount-cgroup" Dec 13 14:26:24.112737 kubelet[1560]: I1213 14:26:24.112682 1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/77058686-ff80-466e-8ccb-e12ff894bc33-cni-path\") pod \"cilium-fwb78\" (UID: \"77058686-ff80-466e-8ccb-e12ff894bc33\") " pod="kube-system/cilium-fwb78" Dec 13 14:26:24.112910 kubelet[1560]: I1213 
14:26:24.112762 1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/77058686-ff80-466e-8ccb-e12ff894bc33-etc-cni-netd\") pod \"cilium-fwb78\" (UID: \"77058686-ff80-466e-8ccb-e12ff894bc33\") " pod="kube-system/cilium-fwb78" Dec 13 14:26:24.112910 kubelet[1560]: I1213 14:26:24.112790 1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/77058686-ff80-466e-8ccb-e12ff894bc33-cilium-ipsec-secrets\") pod \"cilium-fwb78\" (UID: \"77058686-ff80-466e-8ccb-e12ff894bc33\") " pod="kube-system/cilium-fwb78" Dec 13 14:26:24.112910 kubelet[1560]: I1213 14:26:24.112895 1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/77058686-ff80-466e-8ccb-e12ff894bc33-bpf-maps\") pod \"cilium-fwb78\" (UID: \"77058686-ff80-466e-8ccb-e12ff894bc33\") " pod="kube-system/cilium-fwb78" Dec 13 14:26:24.112992 kubelet[1560]: I1213 14:26:24.112951 1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/77058686-ff80-466e-8ccb-e12ff894bc33-hostproc\") pod \"cilium-fwb78\" (UID: \"77058686-ff80-466e-8ccb-e12ff894bc33\") " pod="kube-system/cilium-fwb78" Dec 13 14:26:24.113085 kubelet[1560]: I1213 14:26:24.113038 1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/77058686-ff80-466e-8ccb-e12ff894bc33-host-proc-sys-kernel\") pod \"cilium-fwb78\" (UID: \"77058686-ff80-466e-8ccb-e12ff894bc33\") " pod="kube-system/cilium-fwb78" Dec 13 14:26:24.113120 kubelet[1560]: I1213 14:26:24.113108 1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/77058686-ff80-466e-8ccb-e12ff894bc33-clustermesh-secrets\") pod \"cilium-fwb78\" (UID: \"77058686-ff80-466e-8ccb-e12ff894bc33\") " pod="kube-system/cilium-fwb78" Dec 13 14:26:24.113148 kubelet[1560]: I1213 14:26:24.113136 1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/77058686-ff80-466e-8ccb-e12ff894bc33-cilium-run\") pod \"cilium-fwb78\" (UID: \"77058686-ff80-466e-8ccb-e12ff894bc33\") " pod="kube-system/cilium-fwb78" Dec 13 14:26:24.113195 kubelet[1560]: I1213 14:26:24.113181 1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/77058686-ff80-466e-8ccb-e12ff894bc33-cilium-config-path\") pod \"cilium-fwb78\" (UID: \"77058686-ff80-466e-8ccb-e12ff894bc33\") " pod="kube-system/cilium-fwb78" Dec 13 14:26:24.113234 kubelet[1560]: I1213 14:26:24.113208 1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/77058686-ff80-466e-8ccb-e12ff894bc33-xtables-lock\") pod \"cilium-fwb78\" (UID: \"77058686-ff80-466e-8ccb-e12ff894bc33\") " pod="kube-system/cilium-fwb78" Dec 13 14:26:24.113234 kubelet[1560]: I1213 14:26:24.113235 1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/77058686-ff80-466e-8ccb-e12ff894bc33-host-proc-sys-net\") pod \"cilium-fwb78\" (UID: \"77058686-ff80-466e-8ccb-e12ff894bc33\") " pod="kube-system/cilium-fwb78" Dec 13 14:26:24.113305 kubelet[1560]: I1213 14:26:24.113281 1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8jk7\" (UniqueName: 
\"kubernetes.io/projected/77058686-ff80-466e-8ccb-e12ff894bc33-kube-api-access-v8jk7\") pod \"cilium-fwb78\" (UID: \"77058686-ff80-466e-8ccb-e12ff894bc33\") " pod="kube-system/cilium-fwb78" Dec 13 14:26:24.113371 kubelet[1560]: I1213 14:26:24.113351 1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/77058686-ff80-466e-8ccb-e12ff894bc33-cilium-cgroup\") pod \"cilium-fwb78\" (UID: \"77058686-ff80-466e-8ccb-e12ff894bc33\") " pod="kube-system/cilium-fwb78" Dec 13 14:26:24.113422 kubelet[1560]: I1213 14:26:24.113403 1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/77058686-ff80-466e-8ccb-e12ff894bc33-lib-modules\") pod \"cilium-fwb78\" (UID: \"77058686-ff80-466e-8ccb-e12ff894bc33\") " pod="kube-system/cilium-fwb78" Dec 13 14:26:24.113491 kubelet[1560]: I1213 14:26:24.113466 1560 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/77058686-ff80-466e-8ccb-e12ff894bc33-hubble-tls\") pod \"cilium-fwb78\" (UID: \"77058686-ff80-466e-8ccb-e12ff894bc33\") " pod="kube-system/cilium-fwb78" Dec 13 14:26:24.390920 kubelet[1560]: E1213 14:26:24.390873 1560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:26:24.391499 env[1315]: time="2024-12-13T14:26:24.391450855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fwb78,Uid:77058686-ff80-466e-8ccb-e12ff894bc33,Namespace:kube-system,Attempt:0,}" Dec 13 14:26:24.409124 env[1315]: time="2024-12-13T14:26:24.409037917Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:26:24.409124 env[1315]: time="2024-12-13T14:26:24.409074346Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:26:24.409124 env[1315]: time="2024-12-13T14:26:24.409084125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:26:24.409349 env[1315]: time="2024-12-13T14:26:24.409199483Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6ceb87fac893f9a039b1efae2462cbcc19ce6939fcf88fb67218950576adfe29 pid=3302 runtime=io.containerd.runc.v2 Dec 13 14:26:24.439099 env[1315]: time="2024-12-13T14:26:24.438681242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fwb78,Uid:77058686-ff80-466e-8ccb-e12ff894bc33,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ceb87fac893f9a039b1efae2462cbcc19ce6939fcf88fb67218950576adfe29\"" Dec 13 14:26:24.439503 kubelet[1560]: E1213 14:26:24.439471 1560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:26:24.441155 env[1315]: time="2024-12-13T14:26:24.441128864Z" level=info msg="CreateContainer within sandbox \"6ceb87fac893f9a039b1efae2462cbcc19ce6939fcf88fb67218950576adfe29\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:26:24.454735 env[1315]: time="2024-12-13T14:26:24.454665795Z" level=info msg="CreateContainer within sandbox \"6ceb87fac893f9a039b1efae2462cbcc19ce6939fcf88fb67218950576adfe29\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e1a3b5acbc5a3755b40db2260c6aa506973e8c371c60807cef9bc3d9b217f93d\"" Dec 13 14:26:24.455462 env[1315]: time="2024-12-13T14:26:24.455436247Z" level=info msg="StartContainer for 
\"e1a3b5acbc5a3755b40db2260c6aa506973e8c371c60807cef9bc3d9b217f93d\"" Dec 13 14:26:24.496885 env[1315]: time="2024-12-13T14:26:24.496552810Z" level=info msg="StartContainer for \"e1a3b5acbc5a3755b40db2260c6aa506973e8c371c60807cef9bc3d9b217f93d\" returns successfully" Dec 13 14:26:24.525221 env[1315]: time="2024-12-13T14:26:24.525154019Z" level=info msg="shim disconnected" id=e1a3b5acbc5a3755b40db2260c6aa506973e8c371c60807cef9bc3d9b217f93d Dec 13 14:26:24.525221 env[1315]: time="2024-12-13T14:26:24.525225654Z" level=warning msg="cleaning up after shim disconnected" id=e1a3b5acbc5a3755b40db2260c6aa506973e8c371c60807cef9bc3d9b217f93d namespace=k8s.io Dec 13 14:26:24.525511 env[1315]: time="2024-12-13T14:26:24.525238359Z" level=info msg="cleaning up dead shim" Dec 13 14:26:24.531708 env[1315]: time="2024-12-13T14:26:24.531674744Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:26:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3385 runtime=io.containerd.runc.v2\n" Dec 13 14:26:24.649559 kubelet[1560]: E1213 14:26:24.649386 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:24.921278 kubelet[1560]: I1213 14:26:24.921173 1560 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="05bc4a58-c82c-497c-af8e-a3f1b1db42a1" path="/var/lib/kubelet/pods/05bc4a58-c82c-497c-af8e-a3f1b1db42a1/volumes" Dec 13 14:26:25.060305 kubelet[1560]: E1213 14:26:25.060274 1560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:26:25.062043 env[1315]: time="2024-12-13T14:26:25.061938407Z" level=info msg="CreateContainer within sandbox \"6ceb87fac893f9a039b1efae2462cbcc19ce6939fcf88fb67218950576adfe29\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 14:26:25.076033 env[1315]: 
time="2024-12-13T14:26:25.075975369Z" level=info msg="CreateContainer within sandbox \"6ceb87fac893f9a039b1efae2462cbcc19ce6939fcf88fb67218950576adfe29\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"db4208d52a34ee2afb5f5f5336b1e04648e35c8fa0852f8033befa9abb172565\"" Dec 13 14:26:25.076435 env[1315]: time="2024-12-13T14:26:25.076397780Z" level=info msg="StartContainer for \"db4208d52a34ee2afb5f5f5336b1e04648e35c8fa0852f8033befa9abb172565\"" Dec 13 14:26:25.118038 env[1315]: time="2024-12-13T14:26:25.117995331Z" level=info msg="StartContainer for \"db4208d52a34ee2afb5f5f5336b1e04648e35c8fa0852f8033befa9abb172565\" returns successfully" Dec 13 14:26:25.650521 kubelet[1560]: E1213 14:26:25.650410 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:25.653964 env[1315]: time="2024-12-13T14:26:25.653911799Z" level=info msg="shim disconnected" id=db4208d52a34ee2afb5f5f5336b1e04648e35c8fa0852f8033befa9abb172565 Dec 13 14:26:25.654095 env[1315]: time="2024-12-13T14:26:25.653968336Z" level=warning msg="cleaning up after shim disconnected" id=db4208d52a34ee2afb5f5f5336b1e04648e35c8fa0852f8033befa9abb172565 namespace=k8s.io Dec 13 14:26:25.654095 env[1315]: time="2024-12-13T14:26:25.653977783Z" level=info msg="cleaning up dead shim" Dec 13 14:26:25.661147 env[1315]: time="2024-12-13T14:26:25.661078626Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:26:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3448 runtime=io.containerd.runc.v2\n" Dec 13 14:26:25.880627 env[1315]: time="2024-12-13T14:26:25.880568238Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:25.882612 env[1315]: time="2024-12-13T14:26:25.882575974Z" level=info 
msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:25.885398 env[1315]: time="2024-12-13T14:26:25.885346256Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:26:25.885671 env[1315]: time="2024-12-13T14:26:25.885634292Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 14:26:25.887733 env[1315]: time="2024-12-13T14:26:25.887665041Z" level=info msg="CreateContainer within sandbox \"0dc7e7c8ab15a1cf529760f2679754f95532c793e46ccc3775957e3f432767de\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 14:26:25.899305 env[1315]: time="2024-12-13T14:26:25.899260645Z" level=info msg="CreateContainer within sandbox \"0dc7e7c8ab15a1cf529760f2679754f95532c793e46ccc3775957e3f432767de\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"cb6f8da87dd67486aab3bec33a9725d37a46e3bd15eb2bea406089fe48e26b79\"" Dec 13 14:26:25.899936 env[1315]: time="2024-12-13T14:26:25.899904796Z" level=info msg="StartContainer for \"cb6f8da87dd67486aab3bec33a9725d37a46e3bd15eb2bea406089fe48e26b79\"" Dec 13 14:26:25.959739 env[1315]: time="2024-12-13T14:26:25.958907415Z" level=info msg="StartContainer for \"cb6f8da87dd67486aab3bec33a9725d37a46e3bd15eb2bea406089fe48e26b79\" returns successfully" Dec 13 14:26:26.068537 kubelet[1560]: E1213 14:26:26.068488 1560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:26:26.070610 kubelet[1560]: E1213 14:26:26.070585 1560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:26:26.073917 env[1315]: time="2024-12-13T14:26:26.073847079Z" level=info msg="CreateContainer within sandbox \"6ceb87fac893f9a039b1efae2462cbcc19ce6939fcf88fb67218950576adfe29\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 14:26:26.087544 kubelet[1560]: I1213 14:26:26.087479 1560 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-8n76k" podStartSLOduration=1.525321981 podStartE2EDuration="5.087416858s" podCreationTimestamp="2024-12-13 14:26:21 +0000 UTC" firstStartedPulling="2024-12-13 14:26:22.32386394 +0000 UTC m=+60.375810158" lastFinishedPulling="2024-12-13 14:26:25.885958807 +0000 UTC m=+63.937905035" observedRunningTime="2024-12-13 14:26:26.087151574 +0000 UTC m=+64.139097822" watchObservedRunningTime="2024-12-13 14:26:26.087416858 +0000 UTC m=+64.139363086" Dec 13 14:26:26.140267 env[1315]: time="2024-12-13T14:26:26.140180850Z" level=info msg="CreateContainer within sandbox \"6ceb87fac893f9a039b1efae2462cbcc19ce6939fcf88fb67218950576adfe29\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d5518a3cc24d795b7cbec86b0da8ad63b62a3b3f6eeed64345671d2d9893f7a8\"" Dec 13 14:26:26.141099 env[1315]: time="2024-12-13T14:26:26.141042453Z" level=info msg="StartContainer for \"d5518a3cc24d795b7cbec86b0da8ad63b62a3b3f6eeed64345671d2d9893f7a8\"" Dec 13 14:26:26.221282 systemd[1]: run-containerd-runc-k8s.io-cb6f8da87dd67486aab3bec33a9725d37a46e3bd15eb2bea406089fe48e26b79-runc.GsIgvG.mount: Deactivated successfully. 
Dec 13 14:26:26.279336 env[1315]: time="2024-12-13T14:26:26.279273218Z" level=info msg="StartContainer for \"d5518a3cc24d795b7cbec86b0da8ad63b62a3b3f6eeed64345671d2d9893f7a8\" returns successfully" Dec 13 14:26:26.302602 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d5518a3cc24d795b7cbec86b0da8ad63b62a3b3f6eeed64345671d2d9893f7a8-rootfs.mount: Deactivated successfully. Dec 13 14:26:26.407907 env[1315]: time="2024-12-13T14:26:26.407850688Z" level=info msg="shim disconnected" id=d5518a3cc24d795b7cbec86b0da8ad63b62a3b3f6eeed64345671d2d9893f7a8 Dec 13 14:26:26.407907 env[1315]: time="2024-12-13T14:26:26.407904590Z" level=warning msg="cleaning up after shim disconnected" id=d5518a3cc24d795b7cbec86b0da8ad63b62a3b3f6eeed64345671d2d9893f7a8 namespace=k8s.io Dec 13 14:26:26.407907 env[1315]: time="2024-12-13T14:26:26.407918135Z" level=info msg="cleaning up dead shim" Dec 13 14:26:26.414485 env[1315]: time="2024-12-13T14:26:26.414443472Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:26:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3544 runtime=io.containerd.runc.v2\n" Dec 13 14:26:26.651017 kubelet[1560]: E1213 14:26:26.650937 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:27.074044 kubelet[1560]: E1213 14:26:27.073787 1560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:26:27.074044 kubelet[1560]: E1213 14:26:27.073787 1560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:26:27.075596 env[1315]: time="2024-12-13T14:26:27.075551167Z" level=info msg="CreateContainer within sandbox \"6ceb87fac893f9a039b1efae2462cbcc19ce6939fcf88fb67218950576adfe29\" for container 
&ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 14:26:27.089283 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2887243999.mount: Deactivated successfully. Dec 13 14:26:27.092848 env[1315]: time="2024-12-13T14:26:27.092804938Z" level=info msg="CreateContainer within sandbox \"6ceb87fac893f9a039b1efae2462cbcc19ce6939fcf88fb67218950576adfe29\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"79180c0c40e83b4d0841152470b92ab3afad5530673883f48b958663f2a90ef3\"" Dec 13 14:26:27.093866 env[1315]: time="2024-12-13T14:26:27.093826784Z" level=info msg="StartContainer for \"79180c0c40e83b4d0841152470b92ab3afad5530673883f48b958663f2a90ef3\"" Dec 13 14:26:27.135886 env[1315]: time="2024-12-13T14:26:27.135817817Z" level=info msg="StartContainer for \"79180c0c40e83b4d0841152470b92ab3afad5530673883f48b958663f2a90ef3\" returns successfully" Dec 13 14:26:27.153839 env[1315]: time="2024-12-13T14:26:27.153764991Z" level=info msg="shim disconnected" id=79180c0c40e83b4d0841152470b92ab3afad5530673883f48b958663f2a90ef3 Dec 13 14:26:27.153839 env[1315]: time="2024-12-13T14:26:27.153823263Z" level=warning msg="cleaning up after shim disconnected" id=79180c0c40e83b4d0841152470b92ab3afad5530673883f48b958663f2a90ef3 namespace=k8s.io Dec 13 14:26:27.153839 env[1315]: time="2024-12-13T14:26:27.153835666Z" level=info msg="cleaning up dead shim" Dec 13 14:26:27.160260 env[1315]: time="2024-12-13T14:26:27.160215765Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:26:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3599 runtime=io.containerd.runc.v2\n" Dec 13 14:26:27.651804 kubelet[1560]: E1213 14:26:27.651741 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:27.898350 kubelet[1560]: E1213 14:26:27.898296 1560 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" Dec 13 14:26:28.077461 kubelet[1560]: E1213 14:26:28.077342 1560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:26:28.079226 env[1315]: time="2024-12-13T14:26:28.079169129Z" level=info msg="CreateContainer within sandbox \"6ceb87fac893f9a039b1efae2462cbcc19ce6939fcf88fb67218950576adfe29\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 14:26:28.092114 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2266937721.mount: Deactivated successfully. Dec 13 14:26:28.093328 env[1315]: time="2024-12-13T14:26:28.093271522Z" level=info msg="CreateContainer within sandbox \"6ceb87fac893f9a039b1efae2462cbcc19ce6939fcf88fb67218950576adfe29\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"95058b68fef967dea3d9b846e4d27dec565adb8e2fee7ff307f479749d1486b1\"" Dec 13 14:26:28.093706 env[1315]: time="2024-12-13T14:26:28.093684846Z" level=info msg="StartContainer for \"95058b68fef967dea3d9b846e4d27dec565adb8e2fee7ff307f479749d1486b1\"" Dec 13 14:26:28.135817 env[1315]: time="2024-12-13T14:26:28.135765729Z" level=info msg="StartContainer for \"95058b68fef967dea3d9b846e4d27dec565adb8e2fee7ff307f479749d1486b1\" returns successfully" Dec 13 14:26:28.652808 kubelet[1560]: E1213 14:26:28.652761 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:28.754796 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Dec 13 14:26:29.083317 kubelet[1560]: E1213 14:26:29.083183 1560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:26:29.174936 kubelet[1560]: I1213 14:26:29.174886 1560 
pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-fwb78" podStartSLOduration=5.17484568 podStartE2EDuration="5.17484568s" podCreationTimestamp="2024-12-13 14:26:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:26:29.174611796 +0000 UTC m=+67.226558024" watchObservedRunningTime="2024-12-13 14:26:29.17484568 +0000 UTC m=+67.226791908" Dec 13 14:26:29.653394 kubelet[1560]: E1213 14:26:29.653293 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:30.392465 kubelet[1560]: E1213 14:26:30.392432 1560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:26:30.654315 kubelet[1560]: E1213 14:26:30.654107 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:31.654930 kubelet[1560]: E1213 14:26:31.654876 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:31.682172 systemd-networkd[1087]: lxc_health: Link UP Dec 13 14:26:31.689754 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 14:26:31.690015 systemd-networkd[1087]: lxc_health: Gained carrier Dec 13 14:26:32.393482 kubelet[1560]: E1213 14:26:32.393437 1560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:26:32.612975 kubelet[1560]: E1213 14:26:32.612933 1560 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 127.0.0.1:40996->127.0.0.1:46563: write tcp 127.0.0.1:40996->127.0.0.1:46563: write: broken pipe Dec 13 14:26:32.655444 
kubelet[1560]: E1213 14:26:32.655265 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:33.090592 kubelet[1560]: E1213 14:26:33.090460 1560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:26:33.307130 systemd-networkd[1087]: lxc_health: Gained IPv6LL Dec 13 14:26:33.656152 kubelet[1560]: E1213 14:26:33.656072 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:34.092230 kubelet[1560]: E1213 14:26:34.092106 1560 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:26:34.660014 kubelet[1560]: E1213 14:26:34.656638 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:34.703997 kubelet[1560]: E1213 14:26:34.703960 1560 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 127.0.0.1:41004->127.0.0.1:46563: write tcp 127.0.0.1:41004->127.0.0.1:46563: write: broken pipe Dec 13 14:26:35.657192 kubelet[1560]: E1213 14:26:35.657115 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:36.658304 kubelet[1560]: E1213 14:26:36.658191 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:37.659168 kubelet[1560]: E1213 14:26:37.659079 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:26:38.659329 kubelet[1560]: E1213 14:26:38.659248 1560 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests"