Dec 13 14:18:21.907343 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Dec 13 12:55:10 -00 2024 Dec 13 14:18:21.907370 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e Dec 13 14:18:21.907383 kernel: BIOS-provided physical RAM map: Dec 13 14:18:21.907391 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Dec 13 14:18:21.907399 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Dec 13 14:18:21.907407 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Dec 13 14:18:21.907416 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Dec 13 14:18:21.907425 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Dec 13 14:18:21.907432 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Dec 13 14:18:21.907443 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Dec 13 14:18:21.907451 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Dec 13 14:18:21.907459 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved Dec 13 14:18:21.907467 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Dec 13 14:18:21.907475 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Dec 13 14:18:21.907485 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Dec 13 14:18:21.907496 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Dec 13 14:18:21.907504 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Dec 13 14:18:21.907513 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Dec 13 14:18:21.907521 kernel: NX (Execute Disable) protection: active Dec 13 14:18:21.907530 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable Dec 13 14:18:21.907539 kernel: e820: update [mem 0x9b475018-0x9b47ec57] usable ==> usable Dec 13 14:18:21.907551 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable Dec 13 14:18:21.907560 kernel: e820: update [mem 0x9b438018-0x9b474e57] usable ==> usable Dec 13 14:18:21.907568 kernel: extended physical RAM map: Dec 13 14:18:21.907577 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable Dec 13 14:18:21.907587 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable Dec 13 14:18:21.907596 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Dec 13 14:18:21.907605 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable Dec 13 14:18:21.907652 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Dec 13 14:18:21.907661 kernel: reserve setup_data: [mem 0x000000000080c000-0x000000000080ffff] usable Dec 13 14:18:21.907670 kernel: reserve setup_data: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Dec 13 14:18:21.907678 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b438017] usable Dec 13 14:18:21.907687 kernel: reserve setup_data: [mem 0x000000009b438018-0x000000009b474e57] usable Dec 13 14:18:21.907695 kernel: reserve setup_data: [mem 
0x000000009b474e58-0x000000009b475017] usable Dec 13 14:18:21.907704 kernel: reserve setup_data: [mem 0x000000009b475018-0x000000009b47ec57] usable Dec 13 14:18:21.907712 kernel: reserve setup_data: [mem 0x000000009b47ec58-0x000000009c8eefff] usable Dec 13 14:18:21.907723 kernel: reserve setup_data: [mem 0x000000009c8ef000-0x000000009cb6efff] reserved Dec 13 14:18:21.907732 kernel: reserve setup_data: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Dec 13 14:18:21.907740 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Dec 13 14:18:21.907749 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Dec 13 14:18:21.907762 kernel: reserve setup_data: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Dec 13 14:18:21.907771 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Dec 13 14:18:21.907781 kernel: reserve setup_data: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Dec 13 14:18:21.907791 kernel: efi: EFI v2.70 by EDK II Dec 13 14:18:21.907801 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b673018 RNG=0x9cb73018 Dec 13 14:18:21.907810 kernel: random: crng init done Dec 13 14:18:21.907819 kernel: SMBIOS 2.8 present. Dec 13 14:18:21.907828 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Dec 13 14:18:21.907837 kernel: Hypervisor detected: KVM Dec 13 14:18:21.907846 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 13 14:18:21.907856 kernel: kvm-clock: cpu 0, msr 6a19a001, primary cpu clock Dec 13 14:18:21.907865 kernel: kvm-clock: using sched offset of 5342839975 cycles Dec 13 14:18:21.907880 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 13 14:18:21.907889 kernel: tsc: Detected 2794.748 MHz processor Dec 13 14:18:21.907899 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 14:18:21.907908 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 14:18:21.907918 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Dec 13 14:18:21.907927 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 14:18:21.907937 kernel: Using GB pages for direct mapping Dec 13 14:18:21.907946 kernel: Secure boot disabled Dec 13 14:18:21.907956 kernel: ACPI: Early table checksum verification disabled Dec 13 14:18:21.907978 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Dec 13 14:18:21.907987 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Dec 13 14:18:21.907997 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 14:18:21.908007 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 14:18:21.908016 kernel: ACPI: FACS 0x000000009CBDD000 000040 Dec 13 14:18:21.908026 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 14:18:21.908035 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 14:18:21.908047 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 14:18:21.908057 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 14:18:21.908069 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Dec 13 14:18:21.908078 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Dec 13 14:18:21.908087 kernel: ACPI: Reserving DSDT table memory at 
[mem 0x9cb7a000-0x9cb7c1a7] Dec 13 14:18:21.908097 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Dec 13 14:18:21.908106 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Dec 13 14:18:21.908115 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Dec 13 14:18:21.908125 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Dec 13 14:18:21.908134 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Dec 13 14:18:21.908143 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Dec 13 14:18:21.908155 kernel: No NUMA configuration found Dec 13 14:18:21.908167 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Dec 13 14:18:21.908177 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Dec 13 14:18:21.908187 kernel: Zone ranges: Dec 13 14:18:21.908196 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 14:18:21.908206 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Dec 13 14:18:21.908215 kernel: Normal empty Dec 13 14:18:21.908224 kernel: Movable zone start for each node Dec 13 14:18:21.908234 kernel: Early memory node ranges Dec 13 14:18:21.908245 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Dec 13 14:18:21.908254 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Dec 13 14:18:21.908277 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Dec 13 14:18:21.908286 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Dec 13 14:18:21.908295 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Dec 13 14:18:21.908305 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Dec 13 14:18:21.908314 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Dec 13 14:18:21.908323 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 14:18:21.908333 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Dec 13 14:18:21.908342 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Dec 13 14:18:21.908354 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 14:18:21.908363 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Dec 13 14:18:21.908372 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Dec 13 14:18:21.908382 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Dec 13 14:18:21.908391 kernel: ACPI: PM-Timer IO Port: 0x608 Dec 13 14:18:21.908401 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 13 14:18:21.908410 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 13 14:18:21.908419 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Dec 13 14:18:21.908429 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 13 14:18:21.908440 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 14:18:21.908449 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 13 14:18:21.908459 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 13 14:18:21.908471 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 14:18:21.908481 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Dec 13 14:18:21.908490 kernel: TSC deadline timer available Dec 13 14:18:21.908500 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Dec 13 14:18:21.908512 kernel: kvm-guest: KVM setup pv remote TLB flush Dec 13 14:18:21.908521 kernel: kvm-guest: setup PV sched yield Dec 13 14:18:21.908533 kernel: [mem 
0xc0000000-0xffffffff] available for PCI devices Dec 13 14:18:21.908542 kernel: Booting paravirtualized kernel on KVM Dec 13 14:18:21.908558 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 14:18:21.908570 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 Dec 13 14:18:21.908580 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u524288 Dec 13 14:18:21.908590 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152 Dec 13 14:18:21.908600 kernel: pcpu-alloc: [0] 0 1 2 3 Dec 13 14:18:21.908609 kernel: kvm-guest: setup async PF for cpu 0 Dec 13 14:18:21.908619 kernel: kvm-guest: stealtime: cpu 0, msr 9b21c0c0 Dec 13 14:18:21.908629 kernel: kvm-guest: PV spinlocks enabled Dec 13 14:18:21.908638 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 13 14:18:21.908648 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Dec 13 14:18:21.908660 kernel: Policy zone: DMA32 Dec 13 14:18:21.908671 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e Dec 13 14:18:21.908682 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 14:18:21.908691 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 14:18:21.908703 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 14:18:21.908713 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 14:18:21.908724 kernel: Memory: 2397432K/2567000K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47472K init, 4112K bss, 169308K reserved, 0K cma-reserved) Dec 13 14:18:21.908734 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Dec 13 14:18:21.908743 kernel: ftrace: allocating 34549 entries in 135 pages Dec 13 14:18:21.908753 kernel: ftrace: allocated 135 pages with 4 groups Dec 13 14:18:21.908763 kernel: rcu: Hierarchical RCU implementation. Dec 13 14:18:21.908773 kernel: rcu: RCU event tracing is enabled. Dec 13 14:18:21.908784 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Dec 13 14:18:21.908795 kernel: Rude variant of Tasks RCU enabled. Dec 13 14:18:21.908805 kernel: Tracing variant of Tasks RCU enabled. Dec 13 14:18:21.908815 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 14:18:21.908825 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Dec 13 14:18:21.908835 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Dec 13 14:18:21.908845 kernel: Console: colour dummy device 80x25 Dec 13 14:18:21.908854 kernel: printk: console [ttyS0] enabled Dec 13 14:18:21.908864 kernel: ACPI: Core revision 20210730 Dec 13 14:18:21.908874 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Dec 13 14:18:21.908886 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 14:18:21.908896 kernel: x2apic enabled Dec 13 14:18:21.908906 kernel: Switched APIC routing to physical x2apic. 
Dec 13 14:18:21.908915 kernel: kvm-guest: setup PV IPIs Dec 13 14:18:21.908925 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Dec 13 14:18:21.908935 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Dec 13 14:18:21.908945 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748) Dec 13 14:18:21.908955 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Dec 13 14:18:21.908975 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Dec 13 14:18:21.908987 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Dec 13 14:18:21.908997 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 14:18:21.909010 kernel: Spectre V2 : Mitigation: Retpolines Dec 13 14:18:21.909019 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 14:18:21.909029 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 14:18:21.909038 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Dec 13 14:18:21.909048 kernel: RETBleed: Mitigation: untrained return thunk Dec 13 14:18:21.909060 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 13 14:18:21.909070 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Dec 13 14:18:21.909082 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 14:18:21.909091 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 14:18:21.909101 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 14:18:21.909111 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 14:18:21.909121 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Dec 13 14:18:21.909131 kernel: Freeing SMP alternatives memory: 32K Dec 13 14:18:21.909140 kernel: pid_max: default: 32768 minimum: 301 Dec 13 14:18:21.909150 kernel: LSM: Security Framework initializing Dec 13 14:18:21.909160 kernel: SELinux: Initializing. Dec 13 14:18:21.909172 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 14:18:21.909182 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 14:18:21.909192 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Dec 13 14:18:21.909202 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Dec 13 14:18:21.909211 kernel: ... version: 0 Dec 13 14:18:21.909220 kernel: ... bit width: 48 Dec 13 14:18:21.909230 kernel: ... generic registers: 6 Dec 13 14:18:21.909240 kernel: ... value mask: 0000ffffffffffff Dec 13 14:18:21.909250 kernel: ... max period: 00007fffffffffff Dec 13 14:18:21.909275 kernel: ... fixed-purpose events: 0 Dec 13 14:18:21.909285 kernel: ... event mask: 000000000000003f Dec 13 14:18:21.909294 kernel: signal: max sigframe size: 1776 Dec 13 14:18:21.909304 kernel: rcu: Hierarchical SRCU implementation. Dec 13 14:18:21.909314 kernel: smp: Bringing up secondary CPUs ... Dec 13 14:18:21.909324 kernel: x86: Booting SMP configuration: Dec 13 14:18:21.909333 kernel: .... 
node #0, CPUs: #1 Dec 13 14:18:21.909343 kernel: kvm-clock: cpu 1, msr 6a19a041, secondary cpu clock Dec 13 14:18:21.909353 kernel: kvm-guest: setup async PF for cpu 1 Dec 13 14:18:21.909365 kernel: kvm-guest: stealtime: cpu 1, msr 9b29c0c0 Dec 13 14:18:21.909375 kernel: #2 Dec 13 14:18:21.909385 kernel: kvm-clock: cpu 2, msr 6a19a081, secondary cpu clock Dec 13 14:18:21.909395 kernel: kvm-guest: setup async PF for cpu 2 Dec 13 14:18:21.909404 kernel: kvm-guest: stealtime: cpu 2, msr 9b31c0c0 Dec 13 14:18:21.909414 kernel: #3 Dec 13 14:18:21.909424 kernel: kvm-clock: cpu 3, msr 6a19a0c1, secondary cpu clock Dec 13 14:18:21.909433 kernel: kvm-guest: setup async PF for cpu 3 Dec 13 14:18:21.909443 kernel: kvm-guest: stealtime: cpu 3, msr 9b39c0c0 Dec 13 14:18:21.909455 kernel: smp: Brought up 1 node, 4 CPUs Dec 13 14:18:21.909464 kernel: smpboot: Max logical packages: 1 Dec 13 14:18:21.909474 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Dec 13 14:18:21.909484 kernel: devtmpfs: initialized Dec 13 14:18:21.909497 kernel: x86/mm: Memory block size: 128MB Dec 13 14:18:21.909508 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Dec 13 14:18:21.909518 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Dec 13 14:18:21.909528 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Dec 13 14:18:21.909538 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Dec 13 14:18:21.909549 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Dec 13 14:18:21.909559 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 14:18:21.909569 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Dec 13 14:18:21.909579 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 14:18:21.909589 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 14:18:21.909598 kernel: audit: initializing netlink subsys (disabled) Dec 13 14:18:21.909608 kernel: audit: type=2000 audit(1734099501.485:1): state=initialized audit_enabled=0 res=1 Dec 13 14:18:21.909618 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 14:18:21.909627 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 14:18:21.909639 kernel: cpuidle: using governor menu Dec 13 14:18:21.909649 kernel: ACPI: bus type PCI registered Dec 13 14:18:21.909659 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 14:18:21.909669 kernel: dca service started, version 1.12.1 Dec 13 14:18:21.909679 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Dec 13 14:18:21.909688 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820 Dec 13 14:18:21.909698 kernel: PCI: Using configuration type 1 for base access Dec 13 14:18:21.909708 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 13 14:18:21.909718 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 14:18:21.909730 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 14:18:21.909739 kernel: ACPI: Added _OSI(Module Device) Dec 13 14:18:21.909749 kernel: ACPI: Added _OSI(Processor Device) Dec 13 14:18:21.909759 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 14:18:21.909768 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 14:18:21.909778 kernel: ACPI: Added _OSI(Linux-Dell-Video) Dec 13 14:18:21.909788 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Dec 13 14:18:21.909798 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Dec 13 14:18:21.909807 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 14:18:21.909818 kernel: ACPI: Interpreter enabled Dec 13 14:18:21.909828 kernel: ACPI: PM: (supports S0 S3 S5) Dec 13 14:18:21.909838 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 14:18:21.909847 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 14:18:21.909857 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Dec 13 14:18:21.909867 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 14:18:21.910053 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 14:18:21.910154 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Dec 13 14:18:21.910253 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Dec 13 14:18:21.910286 kernel: PCI host bridge to bus 0000:00 Dec 13 14:18:21.910412 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 14:18:21.910501 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 13 14:18:21.910588 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 14:18:21.910674 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Dec 13 14:18:21.910761 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Dec 13 14:18:21.910870 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Dec 13 14:18:21.910954 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 14:18:21.911085 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Dec 13 14:18:21.911191 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Dec 13 14:18:21.911309 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Dec 13 14:18:21.911403 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Dec 13 14:18:21.911496 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Dec 13 14:18:21.911606 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Dec 13 14:18:21.911699 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 13 14:18:21.911809 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Dec 13 14:18:21.911905 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Dec 13 14:18:21.912009 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Dec 13 14:18:21.912101 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Dec 13 14:18:21.912210 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Dec 13 14:18:21.912331 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Dec 13 14:18:21.912424 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Dec 13 14:18:21.912515 kernel: pci 0000:00:03.0: reg 0x20: 
[mem 0x800004000-0x800007fff 64bit pref] Dec 13 14:18:21.912622 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Dec 13 14:18:21.912716 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Dec 13 14:18:21.912807 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Dec 13 14:18:21.912902 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Dec 13 14:18:21.913003 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Dec 13 14:18:21.913115 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Dec 13 14:18:21.913209 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Dec 13 14:18:21.913379 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Dec 13 14:18:21.913474 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Dec 13 14:18:21.913565 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Dec 13 14:18:21.913682 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Dec 13 14:18:21.913798 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Dec 13 14:18:21.913811 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 13 14:18:21.913821 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 13 14:18:21.913830 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 14:18:21.913839 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 13 14:18:21.913849 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Dec 13 14:18:21.913858 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Dec 13 14:18:21.913870 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Dec 13 14:18:21.913879 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Dec 13 14:18:21.913888 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Dec 13 14:18:21.913897 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Dec 13 14:18:21.913906 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Dec 13 14:18:21.913914 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Dec 13 14:18:21.913923 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Dec 13 14:18:21.913932 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Dec 13 14:18:21.913941 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Dec 13 14:18:21.913951 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Dec 13 14:18:21.913960 kernel: iommu: Default domain type: Translated Dec 13 14:18:21.913980 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 14:18:21.914074 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Dec 13 14:18:21.914166 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 13 14:18:21.914272 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Dec 13 14:18:21.914286 kernel: vgaarb: loaded Dec 13 14:18:21.914296 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 13 14:18:21.914305 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 13 14:18:21.914318 kernel: PTP clock support registered Dec 13 14:18:21.914327 kernel: Registered efivars operations Dec 13 14:18:21.914337 kernel: PCI: Using ACPI for IRQ routing Dec 13 14:18:21.914346 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 14:18:21.914355 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Dec 13 14:18:21.914364 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Dec 13 14:18:21.914374 kernel: e820: reserve RAM buffer [mem 0x9b438018-0x9bffffff] Dec 13 14:18:21.914383 kernel: e820: reserve RAM buffer [mem 0x9b475018-0x9bffffff] Dec 13 14:18:21.914392 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Dec 13 14:18:21.914403 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Dec 13 14:18:21.914412 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Dec 13 14:18:21.914422 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Dec 13 14:18:21.914431 kernel: clocksource: Switched to clocksource kvm-clock Dec 13 14:18:21.914440 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 14:18:21.914450 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 14:18:21.914459 kernel: pnp: PnP ACPI init Dec 13 14:18:21.914577 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Dec 13 14:18:21.914594 kernel: pnp: PnP ACPI: found 6 devices Dec 13 14:18:21.914603 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 14:18:21.914613 kernel: NET: Registered PF_INET protocol family Dec 13 14:18:21.914623 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 14:18:21.914632 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 13 14:18:21.914642 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 14:18:21.914651 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 14:18:21.914661 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Dec 13 14:18:21.914672 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 13 14:18:21.914681 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 14:18:21.914691 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 14:18:21.914700 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 14:18:21.914709 kernel: NET: Registered PF_XDP protocol family Dec 13 14:18:21.914805 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Dec 13 14:18:21.914898 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Dec 13 14:18:21.914993 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 14:18:21.915083 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 14:18:21.915182 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 14:18:21.915269 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Dec 13 14:18:21.915376 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Dec 13 14:18:21.915450 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Dec 13 14:18:21.915460 kernel: PCI: CLS 0 bytes, default 64 Dec 13 14:18:21.915469 kernel: Initialise system trusted keyrings Dec 13 14:18:21.915477 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 13 14:18:21.915485 
kernel: Key type asymmetric registered Dec 13 14:18:21.915497 kernel: Asymmetric key parser 'x509' registered Dec 13 14:18:21.915505 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 13 14:18:21.915524 kernel: io scheduler mq-deadline registered Dec 13 14:18:21.915534 kernel: io scheduler kyber registered Dec 13 14:18:21.915543 kernel: io scheduler bfq registered Dec 13 14:18:21.915551 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 14:18:21.915561 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Dec 13 14:18:21.915569 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 13 14:18:21.915578 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Dec 13 14:18:21.915588 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 14:18:21.915597 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 14:18:21.915606 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 14:18:21.915614 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 14:18:21.915623 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 14:18:21.915632 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 14:18:21.915727 kernel: rtc_cmos 00:04: RTC can wake from S4 Dec 13 14:18:21.915806 kernel: rtc_cmos 00:04: registered as rtc0 Dec 13 14:18:21.915885 kernel: rtc_cmos 00:04: setting system clock to 2024-12-13T14:18:21 UTC (1734099501) Dec 13 14:18:21.915960 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Dec 13 14:18:21.915981 kernel: efifb: probing for efifb Dec 13 14:18:21.915990 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Dec 13 14:18:21.915999 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Dec 13 14:18:21.916007 kernel: efifb: scrolling: redraw Dec 13 14:18:21.916016 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Dec 13 14:18:21.916024 kernel: Console: switching to colour frame buffer device 160x50 Dec 13 14:18:21.916036 kernel: fb0: EFI VGA frame buffer device Dec 13 14:18:21.916046 kernel: pstore: Registered efi as persistent store backend Dec 13 14:18:21.916054 kernel: NET: Registered PF_INET6 protocol family Dec 13 14:18:21.916063 kernel: Segment Routing with IPv6 Dec 13 14:18:21.916073 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 14:18:21.916082 kernel: NET: Registered PF_PACKET protocol family Dec 13 14:18:21.916092 kernel: Key type dns_resolver registered Dec 13 14:18:21.916100 kernel: IPI shorthand broadcast: enabled Dec 13 14:18:21.916109 kernel: sched_clock: Marking stable (494001980, 128748320)->(641225277, -18474977) Dec 13 14:18:21.916118 kernel: registered taskstats version 1 Dec 13 14:18:21.916126 kernel: Loading compiled-in X.509 certificates Dec 13 14:18:21.916135 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: e1d88c9e01f5bb2adeb5b99325e46e5ca8dff115' Dec 13 14:18:21.916143 kernel: Key type .fscrypt registered Dec 13 14:18:21.916152 kernel: Key type fscrypt-provisioning registered Dec 13 14:18:21.916160 kernel: pstore: Using crash dump compression: deflate Dec 13 14:18:21.916171 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 13 14:18:21.916179 kernel: ima: Allocated hash algorithm: sha1 Dec 13 14:18:21.916188 kernel: ima: No architecture policies found Dec 13 14:18:21.916196 kernel: clk: Disabling unused clocks Dec 13 14:18:21.916205 kernel: Freeing unused kernel image (initmem) memory: 47472K Dec 13 14:18:21.916214 kernel: Write protecting the kernel read-only data: 28672k Dec 13 14:18:21.916222 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Dec 13 14:18:21.916231 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K Dec 13 14:18:21.916240 kernel: Run /init as init process Dec 13 14:18:21.916249 kernel: with arguments: Dec 13 14:18:21.916303 kernel: /init Dec 13 14:18:21.916312 kernel: with environment: Dec 13 14:18:21.916321 kernel: HOME=/ Dec 13 14:18:21.916329 kernel: TERM=linux Dec 13 14:18:21.916338 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 14:18:21.916348 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 14:18:21.916360 systemd[1]: Detected virtualization kvm. Dec 13 14:18:21.916371 systemd[1]: Detected architecture x86-64. Dec 13 14:18:21.916380 systemd[1]: Running in initrd. Dec 13 14:18:21.916389 systemd[1]: No hostname configured, using default hostname. Dec 13 14:18:21.916398 systemd[1]: Hostname set to . Dec 13 14:18:21.916408 systemd[1]: Initializing machine ID from VM UUID. Dec 13 14:18:21.916417 systemd[1]: Queued start job for default target initrd.target. Dec 13 14:18:21.916426 systemd[1]: Started systemd-ask-password-console.path. Dec 13 14:18:21.916435 systemd[1]: Reached target cryptsetup.target. Dec 13 14:18:21.916445 systemd[1]: Reached target paths.target. Dec 13 14:18:21.916454 systemd[1]: Reached target slices.target. Dec 13 14:18:21.916463 systemd[1]: Reached target swap.target. Dec 13 14:18:21.916472 systemd[1]: Reached target timers.target. Dec 13 14:18:21.916482 systemd[1]: Listening on iscsid.socket. Dec 13 14:18:21.916491 systemd[1]: Listening on iscsiuio.socket. Dec 13 14:18:21.916500 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 14:18:21.916509 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 14:18:21.916520 systemd[1]: Listening on systemd-journald.socket. Dec 13 14:18:21.916529 systemd[1]: Listening on systemd-networkd.socket. Dec 13 14:18:21.916538 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 14:18:21.916547 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 14:18:21.916557 systemd[1]: Reached target sockets.target. Dec 13 14:18:21.916566 systemd[1]: Starting kmod-static-nodes.service... Dec 13 14:18:21.916575 systemd[1]: Finished network-cleanup.service. Dec 13 14:18:21.916584 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 14:18:21.916593 systemd[1]: Starting systemd-journald.service... Dec 13 14:18:21.916604 systemd[1]: Starting systemd-modules-load.service... Dec 13 14:18:21.916613 systemd[1]: Starting systemd-resolved.service... Dec 13 14:18:21.916622 systemd[1]: Starting systemd-vconsole-setup.service... Dec 13 14:18:21.916631 systemd[1]: Finished kmod-static-nodes.service. Dec 13 14:18:21.916641 systemd[1]: Finished systemd-fsck-usr.service. 
Dec 13 14:18:21.916650 kernel: audit: type=1130 audit(1734099501.907:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:21.916659 systemd[1]: Finished systemd-vconsole-setup.service. Dec 13 14:18:21.916669 kernel: audit: type=1130 audit(1734099501.913:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:21.916682 systemd-journald[198]: Journal started Dec 13 14:18:21.916726 systemd-journald[198]: Runtime Journal (/run/log/journal/a0ec1259d3184c5e896d10c99879b319) is 6.0M, max 48.4M, 42.4M free. Dec 13 14:18:21.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:21.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:21.918289 systemd[1]: Started systemd-journald.service. Dec 13 14:18:21.919382 systemd-modules-load[199]: Inserted module 'overlay' Dec 13 14:18:21.925591 kernel: audit: type=1130 audit(1734099501.919:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:21.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:21.920348 systemd[1]: Starting dracut-cmdline-ask.service... Dec 13 14:18:21.924560 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 14:18:21.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:21.932744 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 14:18:21.937287 kernel: audit: type=1130 audit(1734099501.932:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:21.941677 systemd[1]: Finished dracut-cmdline-ask.service. Dec 13 14:18:21.946854 kernel: audit: type=1130 audit(1734099501.942:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:21.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:21.943441 systemd[1]: Starting dracut-cmdline.service... Dec 13 14:18:21.954835 systemd-resolved[200]: Positive Trust Anchors: Dec 13 14:18:21.954850 systemd-resolved[200]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 14:18:21.959543 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Dec 13 14:18:21.959563 dracut-cmdline[217]: dracut-dracut-053 Dec 13 14:18:21.959563 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e Dec 13 14:18:21.954887 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 14:18:21.972205 systemd-resolved[200]: Defaulting to hostname 'linux'. Dec 13 14:18:21.973896 kernel: Bridge firewalling registered Dec 13 14:18:21.973937 systemd-modules-load[199]: Inserted module 'br_netfilter' Dec 13 14:18:21.975098 systemd[1]: Started systemd-resolved.service. Dec 13 14:18:21.980453 kernel: audit: type=1130 audit(1734099501.975:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:21.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:21.976144 systemd[1]: Reached target nss-lookup.target. Dec 13 14:18:21.993289 kernel: SCSI subsystem initialized Dec 13 14:18:22.005318 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 14:18:22.005369 kernel: device-mapper: uevent: version 1.0.3 Dec 13 14:18:22.005383 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Dec 13 14:18:22.007952 systemd-modules-load[199]: Inserted module 'dm_multipath' Dec 13 14:18:22.008575 systemd[1]: Finished systemd-modules-load.service. Dec 13 14:18:22.014589 kernel: audit: type=1130 audit(1734099502.009:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:22.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:22.010673 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:18:22.017921 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:18:22.023080 kernel: Loading iSCSI transport class v2.0-870. Dec 13 14:18:22.023096 kernel: audit: type=1130 audit(1734099502.018:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:22.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:18:22.034283 kernel: iscsi: registered transport (tcp) Dec 13 14:18:22.055715 kernel: iscsi: registered transport (qla4xxx) Dec 13 14:18:22.055751 kernel: QLogic iSCSI HBA Driver Dec 13 14:18:22.080186 systemd[1]: Finished dracut-cmdline.service. Dec 13 14:18:22.085437 kernel: audit: type=1130 audit(1734099502.080:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:22.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:22.082004 systemd[1]: Starting dracut-pre-udev.service... Dec 13 14:18:22.125301 kernel: raid6: avx2x4 gen() 30991 MB/s Dec 13 14:18:22.142309 kernel: raid6: avx2x4 xor() 7532 MB/s Dec 13 14:18:22.160297 kernel: raid6: avx2x2 gen() 24760 MB/s Dec 13 14:18:22.177301 kernel: raid6: avx2x2 xor() 15261 MB/s Dec 13 14:18:22.194300 kernel: raid6: avx2x1 gen() 7765 MB/s Dec 13 14:18:22.211443 kernel: raid6: avx2x1 xor() 7578 MB/s Dec 13 14:18:22.230662 kernel: raid6: sse2x4 gen() 7486 MB/s Dec 13 14:18:22.247311 kernel: raid6: sse2x4 xor() 4184 MB/s Dec 13 14:18:22.264289 kernel: raid6: sse2x2 gen() 15139 MB/s Dec 13 14:18:22.281291 kernel: raid6: sse2x2 xor() 9414 MB/s Dec 13 14:18:22.298280 kernel: raid6: sse2x1 gen() 12076 MB/s Dec 13 14:18:22.315724 kernel: raid6: sse2x1 xor() 7780 MB/s Dec 13 14:18:22.315746 kernel: raid6: using algorithm avx2x4 gen() 30991 MB/s Dec 13 14:18:22.315755 kernel: raid6: .... xor() 7532 MB/s, rmw enabled Dec 13 14:18:22.316429 kernel: raid6: using avx2x2 recovery algorithm Dec 13 14:18:22.330287 kernel: xor: automatically using best checksumming function avx Dec 13 14:18:22.424294 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 14:18:22.432203 systemd[1]: Finished dracut-pre-udev.service. Dec 13 14:18:22.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:22.433000 audit: BPF prog-id=7 op=LOAD Dec 13 14:18:22.433000 audit: BPF prog-id=8 op=LOAD Dec 13 14:18:22.434405 systemd[1]: Starting systemd-udevd.service... Dec 13 14:18:22.448254 systemd-udevd[400]: Using default interface naming scheme 'v252'. Dec 13 14:18:22.453728 systemd[1]: Started systemd-udevd.service. Dec 13 14:18:22.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:22.454492 systemd[1]: Starting dracut-pre-trigger.service... Dec 13 14:18:22.465195 dracut-pre-trigger[401]: rd.md=0: removing MD RAID activation Dec 13 14:18:22.487242 systemd[1]: Finished dracut-pre-trigger.service. Dec 13 14:18:22.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:22.488792 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 14:18:22.529732 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 14:18:22.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 13 14:18:22.573295 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 14:18:22.600472 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 14:18:22.600577 kernel: AES CTR mode by8 optimization enabled Dec 13 14:18:22.601287 kernel: libata version 3.00 loaded. Dec 13 14:18:22.608985 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Dec 13 14:18:22.618965 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 14:18:22.618999 kernel: GPT:9289727 != 19775487 Dec 13 14:18:22.619013 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 14:18:22.619036 kernel: GPT:9289727 != 19775487 Dec 13 14:18:22.619065 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 14:18:22.619094 kernel: ahci 0000:00:1f.2: version 3.0 Dec 13 14:18:22.672968 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 14:18:22.672992 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 13 14:18:22.673007 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Dec 13 14:18:22.673164 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 13 14:18:22.673311 kernel: scsi host0: ahci Dec 13 14:18:22.673472 kernel: scsi host1: ahci Dec 13 14:18:22.673611 kernel: scsi host2: ahci Dec 13 14:18:22.673771 kernel: scsi host3: ahci Dec 13 14:18:22.673915 kernel: scsi host4: ahci Dec 13 14:18:22.674095 kernel: scsi host5: ahci Dec 13 14:18:22.674392 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (448) Dec 13 14:18:22.674409 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 31 Dec 13 14:18:22.674422 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 31 Dec 13 14:18:22.674439 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 31 Dec 13 14:18:22.674451 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 31 Dec 13 14:18:22.674465 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 31 Dec 13 14:18:22.674478 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 31 Dec 13 14:18:22.656132 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 14:18:22.662583 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 14:18:22.682045 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 14:18:22.683452 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 14:18:22.695318 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 14:18:22.697967 systemd[1]: Starting disk-uuid.service... Dec 13 14:18:22.707815 disk-uuid[530]: Primary Header is updated. Dec 13 14:18:22.707815 disk-uuid[530]: Secondary Entries is updated. Dec 13 14:18:22.707815 disk-uuid[530]: Secondary Header is updated. 
Dec 13 14:18:22.713331 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 14:18:22.717310 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 14:18:22.984545 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Dec 13 14:18:22.984653 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Dec 13 14:18:22.984667 kernel: ata3.00: applying bridge limits Dec 13 14:18:22.984679 kernel: ata3.00: configured for UDMA/100 Dec 13 14:18:22.984704 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 13 14:18:22.984716 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 13 14:18:22.986293 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 13 14:18:22.987298 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 13 14:18:22.988297 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 13 14:18:22.988320 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Dec 13 14:18:23.023480 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Dec 13 14:18:23.041132 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 14:18:23.041155 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Dec 13 14:18:23.717815 disk-uuid[531]: The operation has completed successfully. Dec 13 14:18:23.719837 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 14:18:23.738965 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 14:18:23.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:23.739000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:23.739045 systemd[1]: Finished disk-uuid.service. Dec 13 14:18:23.746723 systemd[1]: Starting verity-setup.service... Dec 13 14:18:23.760295 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Dec 13 14:18:23.777651 systemd[1]: Found device dev-mapper-usr.device. Dec 13 14:18:23.779085 systemd[1]: Mounting sysusr-usr.mount... Dec 13 14:18:23.780850 systemd[1]: Finished verity-setup.service. Dec 13 14:18:23.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:23.854299 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 14:18:23.854810 systemd[1]: Mounted sysusr-usr.mount. Dec 13 14:18:23.855024 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 14:18:23.856588 systemd[1]: Starting ignition-setup.service... Dec 13 14:18:23.859655 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 14:18:23.872356 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 14:18:23.872395 kernel: BTRFS info (device vda6): using free space tree Dec 13 14:18:23.872405 kernel: BTRFS info (device vda6): has skinny extents Dec 13 14:18:23.882864 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 14:18:23.891771 systemd[1]: Finished ignition-setup.service. Dec 13 14:18:23.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:18:23.892882 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 14:18:23.936347 ignition[652]: Ignition 2.14.0 Dec 13 14:18:23.936363 ignition[652]: Stage: fetch-offline Dec 13 14:18:23.936461 ignition[652]: no configs at "/usr/lib/ignition/base.d" Dec 13 14:18:23.936474 ignition[652]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 14:18:23.936608 ignition[652]: parsed url from cmdline: "" Dec 13 14:18:23.940105 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 14:18:23.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:23.943000 audit: BPF prog-id=9 op=LOAD Dec 13 14:18:23.936613 ignition[652]: no config URL provided Dec 13 14:18:23.936619 ignition[652]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 14:18:23.943850 systemd[1]: Starting systemd-networkd.service... Dec 13 14:18:23.936628 ignition[652]: no config at "/usr/lib/ignition/user.ign" Dec 13 14:18:23.937377 ignition[652]: op(1): [started] loading QEMU firmware config module Dec 13 14:18:23.937387 ignition[652]: op(1): executing: "modprobe" "qemu_fw_cfg" Dec 13 14:18:23.945385 ignition[652]: op(1): [finished] loading QEMU firmware config module Dec 13 14:18:23.945405 ignition[652]: QEMU firmware config was not found. Ignoring... Dec 13 14:18:23.984003 systemd-networkd[722]: lo: Link UP Dec 13 14:18:23.984015 systemd-networkd[722]: lo: Gained carrier Dec 13 14:18:23.984698 systemd-networkd[722]: Enumeration completed Dec 13 14:18:23.984988 systemd-networkd[722]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:18:23.986277 systemd-networkd[722]: eth0: Link UP Dec 13 14:18:23.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:23.986282 systemd-networkd[722]: eth0: Gained carrier Dec 13 14:18:23.990371 systemd[1]: Started systemd-networkd.service. Dec 13 14:18:23.991504 systemd[1]: Reached target network.target. Dec 13 14:18:23.996121 ignition[652]: parsing config with SHA512: 8066b6d5979b594de3c1752164180a92bf9b4a7acaf60c49f8d2377b5561eace666aef877a4a82d890c112ed6e1d68731cc475e2dbad1dca6eb39d15ee1eab11 Dec 13 14:18:23.993317 systemd[1]: Starting iscsiuio.service... Dec 13 14:18:23.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:23.998336 systemd[1]: Started iscsiuio.service. Dec 13 14:18:24.000357 systemd[1]: Starting iscsid.service... Dec 13 14:18:24.004479 iscsid[727]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 14:18:24.004479 iscsid[727]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Dec 13 14:18:24.004479 iscsid[727]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. 
Dec 13 14:18:24.004479 iscsid[727]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 14:18:24.004479 iscsid[727]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 14:18:24.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:24.017597 iscsid[727]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 14:18:24.006137 systemd[1]: Started iscsid.service. Dec 13 14:18:24.007097 systemd[1]: Starting dracut-initqueue.service... Dec 13 14:18:24.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:24.020538 systemd[1]: Finished dracut-initqueue.service. Dec 13 14:18:24.025954 ignition[652]: fetch-offline: fetch-offline passed Dec 13 14:18:24.022509 systemd[1]: Reached target remote-fs-pre.target. Dec 13 14:18:24.026026 ignition[652]: Ignition finished successfully Dec 13 14:18:24.023453 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 14:18:24.024385 unknown[652]: fetched base config from "system" Dec 13 14:18:24.024391 unknown[652]: fetched user config from "qemu" Dec 13 14:18:24.024436 systemd-networkd[722]: eth0: DHCPv4 address 10.0.0.24/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 14:18:24.024464 systemd[1]: Reached target remote-fs.target. Dec 13 14:18:24.026463 systemd[1]: Starting dracut-pre-mount.service... Dec 13 14:18:24.029348 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 14:18:24.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:24.038229 systemd[1]: Finished dracut-pre-mount.service. Dec 13 14:18:24.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:24.040188 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 13 14:18:24.043136 systemd[1]: Starting ignition-kargs.service... Dec 13 14:18:24.054684 ignition[742]: Ignition 2.14.0 Dec 13 14:18:24.054700 ignition[742]: Stage: kargs Dec 13 14:18:24.054818 ignition[742]: no configs at "/usr/lib/ignition/base.d" Dec 13 14:18:24.054831 ignition[742]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 14:18:24.059679 ignition[742]: kargs: kargs passed Dec 13 14:18:24.060541 ignition[742]: Ignition finished successfully Dec 13 14:18:24.062641 systemd[1]: Finished ignition-kargs.service. Dec 13 14:18:24.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:24.065384 systemd[1]: Starting ignition-disks.service... 
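The fetch-offline records above show the order in which Ignition looked for a configuration on this qemu boot: the base config directories, the kernel command line, the system user.ign, and finally the QEMU firmware config device (op(1), "modprobe qemu_fw_cfg"), all of which came up empty. Purely as an illustrative sketch of that lookup order (Ignition itself is a Go program; the fw_cfg blob path below is an assumption for illustration and does not appear in this log):

    import os
    import subprocess

    # Locations in the order the fetch-offline messages above report them.
    SEARCH_ORDER = [
        "/usr/lib/ignition/base.d",                # "no configs at ..."
        "/usr/lib/ignition/base.platform.d/qemu",  # "no config dir at ..."
        "/usr/lib/ignition/user.ign",              # "no config at ..."
    ]

    def find_local_config():
        """Return the first existing local config path, or None (as on this boot)."""
        for path in SEARCH_ORDER:
            if os.path.exists(path):
                return path
        return None

    def probe_qemu_fw_cfg():
        """Mirror op(1) above: load qemu_fw_cfg and look for a config blob.
        The by_name path is an assumed example; it is not shown in the log."""
        subprocess.run(["modprobe", "qemu_fw_cfg"], check=False)
        blob = "/sys/firmware/qemu_fw_cfg/by_name/opt/com.coreos/config/raw"
        return blob if os.path.exists(blob) else None  # "not found. Ignoring..."

    if __name__ == "__main__":
        print(find_local_config() or probe_qemu_fw_cfg() or "no config found")

With nothing found, the stage still succeeds, as the "fetch-offline: fetch-offline passed" record further down shows.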
Dec 13 14:18:24.073112 ignition[748]: Ignition 2.14.0 Dec 13 14:18:24.073122 ignition[748]: Stage: disks Dec 13 14:18:24.073221 ignition[748]: no configs at "/usr/lib/ignition/base.d" Dec 13 14:18:24.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:24.075866 systemd[1]: Finished ignition-disks.service. Dec 13 14:18:24.073231 ignition[748]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 14:18:24.078013 systemd[1]: Reached target initrd-root-device.target. Dec 13 14:18:24.074340 ignition[748]: disks: disks passed Dec 13 14:18:24.078080 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:18:24.074385 ignition[748]: Ignition finished successfully Dec 13 14:18:24.078499 systemd[1]: Reached target local-fs.target. Dec 13 14:18:24.078674 systemd[1]: Reached target sysinit.target. Dec 13 14:18:24.078853 systemd[1]: Reached target basic.target. Dec 13 14:18:24.079772 systemd[1]: Starting systemd-fsck-root.service... Dec 13 14:18:24.093362 systemd-fsck[756]: ROOT: clean, 621/553520 files, 56021/553472 blocks Dec 13 14:18:24.099620 systemd[1]: Finished systemd-fsck-root.service. Dec 13 14:18:24.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:24.103299 systemd[1]: Mounting sysroot.mount... Dec 13 14:18:24.111056 systemd[1]: Mounted sysroot.mount. Dec 13 14:18:24.112463 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 14:18:24.112508 systemd[1]: Reached target initrd-root-fs.target. Dec 13 14:18:24.115274 systemd[1]: Mounting sysroot-usr.mount... Dec 13 14:18:24.117115 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Dec 13 14:18:24.117167 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 14:18:24.118832 systemd[1]: Reached target ignition-diskful.target. Dec 13 14:18:24.123807 systemd[1]: Mounted sysroot-usr.mount. Dec 13 14:18:24.126764 systemd[1]: Starting initrd-setup-root.service... Dec 13 14:18:24.132608 initrd-setup-root[766]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 14:18:24.137532 initrd-setup-root[774]: cut: /sysroot/etc/group: No such file or directory Dec 13 14:18:24.141664 initrd-setup-root[782]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 14:18:24.144817 initrd-setup-root[790]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 14:18:24.179129 systemd[1]: Finished initrd-setup-root.service. Dec 13 14:18:24.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:24.181671 systemd[1]: Starting ignition-mount.service... Dec 13 14:18:24.183800 systemd[1]: Starting sysroot-boot.service... Dec 13 14:18:24.188196 bash[807]: umount: /sysroot/usr/share/oem: not mounted. 
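Every record in this capture follows the same console shape, "Dec 13 HH:MM:SS.ffffff source[pid]: message", with kernel lines omitting the pid. If a capture like this needs post-processing, a minimal parser sketch (the regex and Record type here are my own, not part of any tool appearing in the log):

    import re
    from typing import NamedTuple, Optional

    # "Dec 13 14:18:24.093362 systemd-fsck[756]: ROOT: clean, ..." or
    # "Dec 13 14:18:24.112463 kernel: EXT4-fs (vda9): mounted ..."
    LINE_RE = re.compile(
        r"^(?P<month>\w{3}) (?P<day>\d{1,2}) (?P<time>\d{2}:\d{2}:\d{2}\.\d{6}) "
        r"(?P<source>[\w@.-]+)(?:\[(?P<pid>\d+)\])?: (?P<message>.*)$"
    )

    class Record(NamedTuple):
        time: str
        source: str
        pid: Optional[int]
        message: str

    def parse(line: str) -> Optional[Record]:
        """Split one console record into its fields; None if it does not match."""
        m = LINE_RE.match(line)
        if m is None:
            return None
        return Record(m["time"], m["source"],
                      int(m["pid"]) if m["pid"] else None, m["message"])

    # Against the fsck summary above:
    print(parse("Dec 13 14:18:24.093362 systemd-fsck[756]: "
                "ROOT: clean, 621/553520 files, 56021/553472 blocks"))

Wrapped continuation lines simply return None, which is usually the right behaviour when scanning a raw capture.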
Dec 13 14:18:24.198444 ignition[809]: INFO : Ignition 2.14.0 Dec 13 14:18:24.199609 ignition[809]: INFO : Stage: mount Dec 13 14:18:24.199609 ignition[809]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 14:18:24.199609 ignition[809]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 14:18:24.203700 ignition[809]: INFO : mount: mount passed Dec 13 14:18:24.203700 ignition[809]: INFO : Ignition finished successfully Dec 13 14:18:24.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:24.201909 systemd[1]: Finished ignition-mount.service. Dec 13 14:18:24.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:24.206752 systemd[1]: Finished sysroot-boot.service. Dec 13 14:18:24.788698 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 14:18:24.795279 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (817) Dec 13 14:18:24.797966 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 14:18:24.797981 kernel: BTRFS info (device vda6): using free space tree Dec 13 14:18:24.797995 kernel: BTRFS info (device vda6): has skinny extents Dec 13 14:18:24.803003 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 14:18:24.805155 systemd[1]: Starting ignition-files.service... Dec 13 14:18:24.820763 ignition[837]: INFO : Ignition 2.14.0 Dec 13 14:18:24.820763 ignition[837]: INFO : Stage: files Dec 13 14:18:24.822463 ignition[837]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 14:18:24.822463 ignition[837]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 14:18:24.825612 ignition[837]: DEBUG : files: compiled without relabeling support, skipping Dec 13 14:18:24.827512 ignition[837]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 14:18:24.827512 ignition[837]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 14:18:24.831642 ignition[837]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 14:18:24.833115 ignition[837]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 14:18:24.833115 ignition[837]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 14:18:24.832582 unknown[837]: wrote ssh authorized keys file for user: core Dec 13 14:18:24.837506 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 14:18:24.837506 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 14:18:24.837506 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 14:18:24.837506 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 14:18:24.882090 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 13 14:18:24.969365 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file 
"/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 14:18:24.969365 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 13 14:18:24.973758 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 14:18:24.973758 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 14:18:24.973758 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 14:18:24.973758 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 14:18:24.973758 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 14:18:24.973758 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 14:18:24.973758 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 14:18:24.973758 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:18:24.973758 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:18:24.973758 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:18:24.973758 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:18:24.973758 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:18:24.973758 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 13 14:18:25.470689 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Dec 13 14:18:25.902416 systemd-networkd[722]: eth0: Gained IPv6LL Dec 13 14:18:26.447927 ignition[837]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 14:18:26.447927 ignition[837]: INFO : files: op(c): [started] processing unit "containerd.service" Dec 13 14:18:26.452519 ignition[837]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 14:18:26.452519 ignition[837]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 14:18:26.452519 ignition[837]: INFO : files: op(c): [finished] processing unit "containerd.service" Dec 13 14:18:26.452519 ignition[837]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Dec 13 14:18:26.452519 ignition[837]: INFO : files: 
op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 14:18:26.452519 ignition[837]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 14:18:26.452519 ignition[837]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Dec 13 14:18:26.452519 ignition[837]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Dec 13 14:18:26.452519 ignition[837]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 14:18:26.452519 ignition[837]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 14:18:26.452519 ignition[837]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Dec 13 14:18:26.452519 ignition[837]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Dec 13 14:18:26.452519 ignition[837]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 14:18:26.452519 ignition[837]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service" Dec 13 14:18:26.452519 ignition[837]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 14:18:26.511041 ignition[837]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 14:18:26.513000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:26.514420 ignition[837]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" Dec 13 14:18:26.514420 ignition[837]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:18:26.514420 ignition[837]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:18:26.514420 ignition[837]: INFO : files: files passed Dec 13 14:18:26.514420 ignition[837]: INFO : Ignition finished successfully Dec 13 14:18:26.537381 kernel: kauditd_printk_skb: 24 callbacks suppressed Dec 13 14:18:26.537417 kernel: audit: type=1130 audit(1734099506.513:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:26.537433 kernel: audit: type=1130 audit(1734099506.527:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:26.537447 kernel: audit: type=1131 audit(1734099506.527:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:26.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:18:26.527000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:26.512964 systemd[1]: Finished ignition-files.service. Dec 13 14:18:26.515318 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 14:18:26.521969 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 14:18:26.541843 initrd-setup-root-after-ignition[863]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Dec 13 14:18:26.522696 systemd[1]: Starting ignition-quench.service... Dec 13 14:18:26.526191 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 14:18:26.526276 systemd[1]: Finished ignition-quench.service. Dec 13 14:18:26.547352 initrd-setup-root-after-ignition[865]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 14:18:26.549788 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 14:18:26.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:26.551853 systemd[1]: Reached target ignition-complete.target. Dec 13 14:18:26.556582 kernel: audit: type=1130 audit(1734099506.551:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:26.556686 systemd[1]: Starting initrd-parse-etc.service... Dec 13 14:18:26.573530 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 14:18:26.573662 systemd[1]: Finished initrd-parse-etc.service. Dec 13 14:18:26.582528 kernel: audit: type=1130 audit(1734099506.574:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:26.582553 kernel: audit: type=1131 audit(1734099506.574:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:26.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:26.574000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:26.574843 systemd[1]: Reached target initrd-fs.target. Dec 13 14:18:26.583380 systemd[1]: Reached target initrd.target. Dec 13 14:18:26.585552 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 14:18:26.586752 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 14:18:26.597831 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 14:18:26.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:26.600814 systemd[1]: Starting initrd-cleanup.service... 
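The files stage earlier in this boot fetched two artifacts over HTTPS (the helm 3.13.2 tarball and the kubernetes-v1.29.2 sysext image), each logged as "GET <url>: attempt #1" followed by "GET result: OK". Purely to illustrate that attempt-numbered retry pattern, and not Ignition's real fetcher (which is written in Go and also enforces any configured verification), a small sketch:

    import time
    import urllib.request

    def fetch_with_attempts(url, dest, max_attempts=5, delay_s=2.0):
        """Download url to dest, numbering attempts like the records above.
        Illustrative only; a real fetcher also verifies hashes and backs off."""
        for attempt in range(1, max_attempts + 1):
            print(f"GET {url}: attempt #{attempt}")
            try:
                urllib.request.urlretrieve(url, dest)
                print("GET result: OK")
                return dest
            except OSError as err:
                print(f"GET error: {err}")
                time.sleep(delay_s)
        raise RuntimeError(f"giving up on {url} after {max_attempts} attempts")

    # e.g. the first artifact named in the log:
    # fetch_with_attempts("https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz",
    #                     "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz")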
Dec 13 14:18:26.604484 kernel: audit: type=1130 audit(1734099506.599:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:26.612704 systemd[1]: Stopped target nss-lookup.target. Dec 13 14:18:26.613744 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 14:18:26.615619 systemd[1]: Stopped target timers.target. Dec 13 14:18:26.617310 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 14:18:26.618448 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 14:18:26.624408 kernel: audit: type=1131 audit(1734099506.619:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:26.619000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:26.620202 systemd[1]: Stopped target initrd.target. Dec 13 14:18:26.624648 systemd[1]: Stopped target basic.target. Dec 13 14:18:26.626121 systemd[1]: Stopped target ignition-complete.target. Dec 13 14:18:26.627771 systemd[1]: Stopped target ignition-diskful.target. Dec 13 14:18:26.629613 systemd[1]: Stopped target initrd-root-device.target. Dec 13 14:18:26.631574 systemd[1]: Stopped target remote-fs.target. Dec 13 14:18:26.633512 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 14:18:26.635289 systemd[1]: Stopped target sysinit.target. Dec 13 14:18:26.637103 systemd[1]: Stopped target local-fs.target. Dec 13 14:18:26.638731 systemd[1]: Stopped target local-fs-pre.target. Dec 13 14:18:26.640556 systemd[1]: Stopped target swap.target. Dec 13 14:18:26.642187 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 14:18:26.648641 kernel: audit: type=1131 audit(1734099506.643:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:26.643000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:26.642333 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 14:18:26.644088 systemd[1]: Stopped target cryptsetup.target. Dec 13 14:18:26.654845 kernel: audit: type=1131 audit(1734099506.650:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:26.650000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:26.648683 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 14:18:26.654000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:26.648793 systemd[1]: Stopped dracut-initqueue.service. Dec 13 14:18:26.650476 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 14:18:26.650573 systemd[1]: Stopped ignition-fetch-offline.service. 
Dec 13 14:18:26.654972 systemd[1]: Stopped target paths.target. Dec 13 14:18:26.656672 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 14:18:26.660299 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 14:18:26.662327 systemd[1]: Stopped target slices.target. Dec 13 14:18:26.663967 systemd[1]: Stopped target sockets.target. Dec 13 14:18:26.665513 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 14:18:26.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:26.665608 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 14:18:26.668000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:26.667247 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 14:18:26.672502 iscsid[727]: iscsid shutting down. Dec 13 14:18:26.667340 systemd[1]: Stopped ignition-files.service. Dec 13 14:18:26.670052 systemd[1]: Stopping ignition-mount.service... Dec 13 14:18:26.677422 ignition[878]: INFO : Ignition 2.14.0 Dec 13 14:18:26.677422 ignition[878]: INFO : Stage: umount Dec 13 14:18:26.677422 ignition[878]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 14:18:26.677422 ignition[878]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 14:18:26.675000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:26.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:26.680000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:26.670975 systemd[1]: Stopping iscsid.service... Dec 13 14:18:26.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:26.683847 ignition[878]: INFO : umount: umount passed Dec 13 14:18:26.683847 ignition[878]: INFO : Ignition finished successfully Dec 13 14:18:26.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:26.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:26.673383 systemd[1]: Stopping sysroot-boot.service... Dec 13 14:18:26.688000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:26.674174 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Dec 13 14:18:26.689000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:26.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:26.674377 systemd[1]: Stopped systemd-udev-trigger.service. Dec 13 14:18:26.675910 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 14:18:26.676041 systemd[1]: Stopped dracut-pre-trigger.service. Dec 13 14:18:26.679093 systemd[1]: iscsid.service: Deactivated successfully. Dec 13 14:18:26.679183 systemd[1]: Stopped iscsid.service. Dec 13 14:18:26.681218 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 14:18:26.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:26.681307 systemd[1]: Stopped ignition-mount.service. Dec 13 14:18:26.684075 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 14:18:26.684143 systemd[1]: Finished initrd-cleanup.service. Dec 13 14:18:26.686236 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 14:18:26.686288 systemd[1]: Closed iscsid.socket. Dec 13 14:18:26.687152 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 14:18:26.687186 systemd[1]: Stopped ignition-disks.service. Dec 13 14:18:26.688994 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 14:18:26.689032 systemd[1]: Stopped ignition-kargs.service. Dec 13 14:18:26.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:26.689913 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 14:18:26.689952 systemd[1]: Stopped ignition-setup.service. Dec 13 14:18:26.691699 systemd[1]: Stopping iscsiuio.service... Dec 13 14:18:26.695950 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 14:18:26.696248 systemd[1]: iscsiuio.service: Deactivated successfully. Dec 13 14:18:26.696338 systemd[1]: Stopped iscsiuio.service. Dec 13 14:18:26.697538 systemd[1]: Stopped target network.target. Dec 13 14:18:26.699250 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 14:18:26.699289 systemd[1]: Closed iscsiuio.socket. Dec 13 14:18:26.700823 systemd[1]: Stopping systemd-networkd.service... Dec 13 14:18:26.702796 systemd[1]: Stopping systemd-resolved.service... Dec 13 14:18:26.707330 systemd-networkd[722]: eth0: DHCPv6 lease lost Dec 13 14:18:26.721000 audit: BPF prog-id=9 op=UNLOAD Dec 13 14:18:26.708445 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 14:18:26.708556 systemd[1]: Stopped systemd-networkd.service. Dec 13 14:18:26.710106 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 14:18:26.710134 systemd[1]: Closed systemd-networkd.socket. Dec 13 14:18:26.722424 systemd[1]: Stopping network-cleanup.service... Dec 13 14:18:26.727131 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 14:18:26.728151 systemd[1]: Stopped parse-ip-for-networkd.service. 
Dec 13 14:18:26.729000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:26.729910 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:18:26.729950 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:18:26.732000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:26.732606 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 14:18:26.732648 systemd[1]: Stopped systemd-modules-load.service. Dec 13 14:18:26.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:26.735417 systemd[1]: Stopping systemd-udevd.service... Dec 13 14:18:26.737954 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 14:18:26.738423 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 14:18:26.740000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:26.738521 systemd[1]: Stopped systemd-resolved.service. Dec 13 14:18:26.743000 audit: BPF prog-id=6 op=UNLOAD Dec 13 14:18:26.744539 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 14:18:26.745571 systemd[1]: Stopped systemd-udevd.service. Dec 13 14:18:26.746000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:26.747623 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 14:18:26.758191 systemd[1]: Stopped network-cleanup.service. Dec 13 14:18:26.759000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:26.759997 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 14:18:26.760033 systemd[1]: Closed systemd-udevd-control.socket. Dec 13 14:18:26.762707 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 14:18:26.762741 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 14:18:26.765776 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 14:18:26.766869 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 14:18:26.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:26.768515 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 14:18:26.769452 systemd[1]: Stopped dracut-cmdline.service. Dec 13 14:18:26.777000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:26.777321 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 14:18:26.778330 systemd[1]: Stopped dracut-cmdline-ask.service. 
Dec 13 14:18:26.779000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:26.780828 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 14:18:26.782607 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 14:18:26.782658 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Dec 13 14:18:26.785000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:26.785688 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 14:18:26.786000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:26.785726 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 14:18:26.788000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:26.787595 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 14:18:26.787632 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 14:18:26.792111 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Dec 13 14:18:26.794017 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 14:18:26.795196 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 14:18:26.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:26.796000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:26.804852 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 14:18:26.804962 systemd[1]: Stopped sysroot-boot.service. Dec 13 14:18:26.807000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:26.807567 systemd[1]: Reached target initrd-switch-root.target. Dec 13 14:18:26.809653 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 14:18:26.810716 systemd[1]: Stopped initrd-setup-root.service. Dec 13 14:18:26.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:26.813160 systemd[1]: Starting initrd-switch-root.service... Dec 13 14:18:26.818625 systemd[1]: Switching root. 
Dec 13 14:18:26.821000 audit: BPF prog-id=5 op=UNLOAD Dec 13 14:18:26.821000 audit: BPF prog-id=4 op=UNLOAD Dec 13 14:18:26.821000 audit: BPF prog-id=3 op=UNLOAD Dec 13 14:18:26.823000 audit: BPF prog-id=8 op=UNLOAD Dec 13 14:18:26.823000 audit: BPF prog-id=7 op=UNLOAD Dec 13 14:18:26.838888 systemd-journald[198]: Journal stopped Dec 13 14:18:30.714032 systemd-journald[198]: Received SIGTERM from PID 1 (systemd). Dec 13 14:18:30.714106 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 14:18:30.714126 kernel: SELinux: Class anon_inode not defined in policy. Dec 13 14:18:30.714140 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 14:18:30.714154 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 14:18:30.714171 kernel: SELinux: policy capability open_perms=1 Dec 13 14:18:30.714189 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 14:18:30.714203 kernel: SELinux: policy capability always_check_network=0 Dec 13 14:18:30.714217 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 14:18:30.714234 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 14:18:30.714248 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 14:18:30.714276 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 14:18:30.714304 systemd[1]: Successfully loaded SELinux policy in 42.596ms. Dec 13 14:18:30.714333 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.697ms. Dec 13 14:18:30.714354 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 14:18:30.714370 systemd[1]: Detected virtualization kvm. Dec 13 14:18:30.714385 systemd[1]: Detected architecture x86-64. Dec 13 14:18:30.714403 systemd[1]: Detected first boot. Dec 13 14:18:30.714418 systemd[1]: Initializing machine ID from VM UUID. Dec 13 14:18:30.714436 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Dec 13 14:18:30.714451 systemd[1]: Populated /etc with preset unit settings. Dec 13 14:18:30.714466 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:18:30.714482 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:18:30.714499 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:18:30.714515 systemd[1]: Queued start job for default target multi-user.target. Dec 13 14:18:30.714530 systemd[1]: Unnecessary job was removed for dev-vda6.device. Dec 13 14:18:30.714545 systemd[1]: Created slice system-addon\x2dconfig.slice. Dec 13 14:18:30.714563 systemd[1]: Created slice system-addon\x2drun.slice. Dec 13 14:18:30.714578 systemd[1]: Created slice system-getty.slice. Dec 13 14:18:30.714593 systemd[1]: Created slice system-modprobe.slice. Dec 13 14:18:30.714608 systemd[1]: Created slice system-serial\x2dgetty.slice. Dec 13 14:18:30.714623 systemd[1]: Created slice system-system\x2dcloudinit.slice. 
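The "systemd 252 running in system mode (...)" banner above lists the build-time features as +FLAG / -FLAG tokens plus key=value settings. A small helper to split such a banner (my own sketch, not a systemd interface); note that -BPF_FRAMEWORK in this build is consistent with the later warning that systemd-journald's IP firewalling directives cannot be enforced:

    def parse_features(banner):
        """Split a 'systemd NNN running in system mode (...)' banner into
        enabled flags, disabled flags, and key=value settings."""
        inner = banner[banner.index("(") + 1 : banner.rindex(")")]
        enabled, disabled, settings = set(), set(), {}
        for token in inner.split():
            if token.startswith("+"):
                enabled.add(token[1:])
            elif token.startswith("-"):
                disabled.add(token[1:])
            elif "=" in token:
                key, value = token.split("=", 1)
                settings[key] = value
        return enabled, disabled, settings

    # Abridged from the banner above:
    banner = ("systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR "
              "-BPF_FRAMEWORK +UTMP +SYSVINIT default-hierarchy=unified)")
    enabled, disabled, settings = parse_features(banner)
    assert "SELINUX" in enabled and "BPF_FRAMEWORK" in disabled
    assert settings["default-hierarchy"] == "unified"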
Dec 13 14:18:30.714637 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 14:18:30.714652 systemd[1]: Created slice user.slice. Dec 13 14:18:30.714667 systemd[1]: Started systemd-ask-password-console.path. Dec 13 14:18:30.714682 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 14:18:30.714703 systemd[1]: Set up automount boot.automount. Dec 13 14:18:30.714719 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 14:18:30.714735 systemd[1]: Reached target integritysetup.target. Dec 13 14:18:30.714749 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 14:18:30.714764 systemd[1]: Reached target remote-fs.target. Dec 13 14:18:30.714787 systemd[1]: Reached target slices.target. Dec 13 14:18:30.714803 systemd[1]: Reached target swap.target. Dec 13 14:18:30.714817 systemd[1]: Reached target torcx.target. Dec 13 14:18:30.714838 systemd[1]: Reached target veritysetup.target. Dec 13 14:18:30.714854 systemd[1]: Listening on systemd-coredump.socket. Dec 13 14:18:30.714868 systemd[1]: Listening on systemd-initctl.socket. Dec 13 14:18:30.714883 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 14:18:30.714897 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 14:18:30.714910 systemd[1]: Listening on systemd-journald.socket. Dec 13 14:18:30.714925 systemd[1]: Listening on systemd-networkd.socket. Dec 13 14:18:30.714938 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 14:18:30.714953 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 14:18:30.714967 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 14:18:30.714984 systemd[1]: Mounting dev-hugepages.mount... Dec 13 14:18:30.714999 systemd[1]: Mounting dev-mqueue.mount... Dec 13 14:18:30.715013 systemd[1]: Mounting media.mount... Dec 13 14:18:30.715028 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:18:30.715043 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 14:18:30.715060 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 14:18:30.715074 systemd[1]: Mounting tmp.mount... Dec 13 14:18:30.715087 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 14:18:30.715101 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:18:30.715118 systemd[1]: Starting kmod-static-nodes.service... Dec 13 14:18:30.715132 systemd[1]: Starting modprobe@configfs.service... Dec 13 14:18:30.715146 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:18:30.715159 systemd[1]: Starting modprobe@drm.service... Dec 13 14:18:30.715174 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:18:30.715188 systemd[1]: Starting modprobe@fuse.service... Dec 13 14:18:30.715201 systemd[1]: Starting modprobe@loop.service... Dec 13 14:18:30.715216 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 14:18:30.715230 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Dec 13 14:18:30.715247 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Dec 13 14:18:30.715309 systemd[1]: Starting systemd-journald.service... Dec 13 14:18:30.715328 systemd[1]: Starting systemd-modules-load.service... Dec 13 14:18:30.715352 kernel: fuse: init (API version 7.34) Dec 13 14:18:30.715371 systemd[1]: Starting systemd-network-generator.service... 
Dec 13 14:18:30.715383 systemd[1]: Starting systemd-remount-fs.service... Dec 13 14:18:30.715395 kernel: loop: module loaded Dec 13 14:18:30.715407 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 14:18:30.715422 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:18:30.715437 systemd[1]: Mounted dev-hugepages.mount. Dec 13 14:18:30.715450 systemd[1]: Mounted dev-mqueue.mount. Dec 13 14:18:30.715462 systemd[1]: Mounted media.mount. Dec 13 14:18:30.715475 systemd[1]: Mounted sys-kernel-debug.mount. Dec 13 14:18:30.715487 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 14:18:30.715499 systemd[1]: Mounted tmp.mount. Dec 13 14:18:30.715511 systemd[1]: Finished kmod-static-nodes.service. Dec 13 14:18:30.715530 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 14:18:30.715542 systemd[1]: Finished modprobe@configfs.service. Dec 13 14:18:30.715555 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:18:30.715567 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:18:30.715579 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 14:18:30.715595 systemd-journald[1028]: Journal started Dec 13 14:18:30.715644 systemd-journald[1028]: Runtime Journal (/run/log/journal/a0ec1259d3184c5e896d10c99879b319) is 6.0M, max 48.4M, 42.4M free. Dec 13 14:18:30.577000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 13 14:18:30.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:30.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:30.711000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:30.712000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 14:18:30.712000 audit[1028]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffd0d9d3830 a2=4000 a3=7ffd0d9d38cc items=0 ppid=1 pid=1028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:18:30.712000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 14:18:30.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:30.714000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:30.718274 systemd[1]: Finished modprobe@drm.service. Dec 13 14:18:30.719350 systemd[1]: Started systemd-journald.service. 
Dec 13 14:18:30.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:30.718000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:30.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:30.721417 systemd[1]: Finished flatcar-tmpfiles.service. Dec 13 14:18:30.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:30.722683 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:18:30.722891 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:18:30.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:30.723000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:30.723971 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 14:18:30.724124 systemd[1]: Finished modprobe@fuse.service. Dec 13 14:18:30.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:30.724000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:30.725287 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:18:30.725524 systemd[1]: Finished modprobe@loop.service. Dec 13 14:18:30.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:30.726000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:30.726836 systemd[1]: Finished systemd-modules-load.service. Dec 13 14:18:30.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:30.728152 systemd[1]: Finished systemd-network-generator.service. Dec 13 14:18:30.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:18:30.729560 systemd[1]: Finished systemd-remount-fs.service. Dec 13 14:18:30.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:30.731047 systemd[1]: Reached target network-pre.target. Dec 13 14:18:30.733442 systemd[1]: Mounting sys-fs-fuse-connections.mount... Dec 13 14:18:30.735506 systemd[1]: Mounting sys-kernel-config.mount... Dec 13 14:18:30.736306 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 14:18:30.738327 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 14:18:30.742317 systemd[1]: Starting systemd-journal-flush.service... Dec 13 14:18:30.743358 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:18:30.744515 systemd[1]: Starting systemd-random-seed.service... Dec 13 14:18:30.745630 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:18:30.746735 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:18:30.750925 systemd[1]: Starting systemd-sysusers.service... Dec 13 14:18:30.796509 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 14:18:30.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:30.798171 systemd[1]: Mounted sys-fs-fuse-connections.mount. Dec 13 14:18:30.799419 systemd[1]: Mounted sys-kernel-config.mount. Dec 13 14:18:30.802056 systemd-journald[1028]: Time spent on flushing to /var/log/journal/a0ec1259d3184c5e896d10c99879b319 is 27.394ms for 1110 entries. Dec 13 14:18:30.802056 systemd-journald[1028]: System Journal (/var/log/journal/a0ec1259d3184c5e896d10c99879b319) is 8.0M, max 195.6M, 187.6M free. Dec 13 14:18:30.850289 systemd-journald[1028]: Received client request to flush runtime journal. Dec 13 14:18:30.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:30.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:30.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:30.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:30.801437 systemd[1]: Finished systemd-random-seed.service. Dec 13 14:18:30.804057 systemd[1]: Reached target first-boot-complete.target. Dec 13 14:18:30.850746 udevadm[1064]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 14:18:30.806828 systemd[1]: Starting systemd-udev-settle.service... 
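For scale, the journald flush report above (27.394 ms for 1110 entries into the 8.0M system journal) works out to roughly 25 µs per entry, assuming the reported time covers exactly those entries:

    # Average flush cost per entry, from the figures journald reports above.
    flush_ms, entries = 27.394, 1110
    print(f"{flush_ms / entries * 1000:.1f} us per entry")  # ~24.7 us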
Dec 13 14:18:30.817501 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:18:30.818740 systemd[1]: Finished systemd-sysusers.service. Dec 13 14:18:30.821099 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 14:18:30.841697 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 14:18:30.851774 systemd[1]: Finished systemd-journal-flush.service. Dec 13 14:18:30.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:31.558932 systemd[1]: Finished systemd-hwdb-update.service. Dec 13 14:18:31.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:31.560804 kernel: kauditd_printk_skb: 77 callbacks suppressed Dec 13 14:18:31.560882 kernel: audit: type=1130 audit(1734099511.559:113): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:31.561238 systemd[1]: Starting systemd-udevd.service... Dec 13 14:18:31.579089 systemd-udevd[1074]: Using default interface naming scheme 'v252'. Dec 13 14:18:31.592923 systemd[1]: Started systemd-udevd.service. Dec 13 14:18:31.600446 kernel: audit: type=1130 audit(1734099511.593:114): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:31.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:31.596326 systemd[1]: Starting systemd-networkd.service... Dec 13 14:18:31.605445 systemd[1]: Starting systemd-userdbd.service... Dec 13 14:18:31.613544 systemd[1]: Found device dev-ttyS0.device. Dec 13 14:18:31.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:31.646946 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 14:18:31.648645 systemd[1]: Started systemd-userdbd.service. Dec 13 14:18:31.654358 kernel: audit: type=1130 audit(1734099511.649:115): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:31.683287 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Dec 13 14:18:31.687297 kernel: ACPI: button: Power Button [PWRF] Dec 13 14:18:31.713237 systemd-networkd[1087]: lo: Link UP Dec 13 14:18:31.713250 systemd-networkd[1087]: lo: Gained carrier Dec 13 14:18:31.713821 systemd-networkd[1087]: Enumeration completed Dec 13 14:18:31.713957 systemd-networkd[1087]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:18:31.713972 systemd[1]: Started systemd-networkd.service. 
Dec 13 14:18:31.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:31.716980 systemd-networkd[1087]: eth0: Link UP Dec 13 14:18:31.716993 systemd-networkd[1087]: eth0: Gained carrier Dec 13 14:18:31.719287 kernel: audit: type=1130 audit(1734099511.714:116): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:31.731484 systemd-networkd[1087]: eth0: DHCPv4 address 10.0.0.24/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 14:18:31.710000 audit[1085]: AVC avc: denied { confidentiality } for pid=1085 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 14:18:31.738286 kernel: audit: type=1400 audit(1734099511.710:117): avc: denied { confidentiality } for pid=1085 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 14:18:31.710000 audit[1085]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5618bc194020 a1=337fc a2=7f7220155bc5 a3=5 items=110 ppid=1074 pid=1085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:18:31.710000 audit: CWD cwd="/" Dec 13 14:18:31.748880 kernel: audit: type=1300 audit(1734099511.710:117): arch=c000003e syscall=175 success=yes exit=0 a0=5618bc194020 a1=337fc a2=7f7220155bc5 a3=5 items=110 ppid=1074 pid=1085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:18:31.748938 kernel: audit: type=1307 audit(1734099511.710:117): cwd="/" Dec 13 14:18:31.748968 kernel: audit: type=1302 audit(1734099511.710:117): item=0 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=0 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=1 name=(null) inode=14586 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.755965 kernel: audit: type=1302 audit(1734099511.710:117): item=1 name=(null) inode=14586 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.756014 kernel: audit: type=1302 audit(1734099511.710:117): item=2 name=(null) inode=14586 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=2 name=(null) inode=14586 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=3 name=(null) inode=14587 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=4 name=(null) inode=14586 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=5 name=(null) inode=14588 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=6 name=(null) inode=14586 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=7 name=(null) inode=14589 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=8 name=(null) inode=14589 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=9 name=(null) inode=14590 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=10 name=(null) inode=14589 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=11 name=(null) inode=14591 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=12 name=(null) inode=14589 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=13 name=(null) inode=14592 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=14 name=(null) inode=14589 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=15 name=(null) inode=14593 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=16 name=(null) inode=14589 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=17 name=(null) inode=14594 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=18 name=(null) inode=14586 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=19 name=(null) inode=14595 
dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=20 name=(null) inode=14595 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=21 name=(null) inode=14596 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=22 name=(null) inode=14595 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=23 name=(null) inode=14597 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=24 name=(null) inode=14595 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=25 name=(null) inode=14598 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=26 name=(null) inode=14595 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=27 name=(null) inode=14599 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=28 name=(null) inode=14595 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=29 name=(null) inode=14600 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=30 name=(null) inode=14586 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=31 name=(null) inode=14601 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=32 name=(null) inode=14601 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=33 name=(null) inode=14602 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=34 name=(null) inode=14601 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=35 name=(null) inode=14603 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=36 name=(null) inode=14601 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=37 name=(null) inode=14604 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=38 name=(null) inode=14601 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=39 name=(null) inode=14605 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=40 name=(null) inode=14601 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=41 name=(null) inode=14606 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=42 name=(null) inode=14586 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=43 name=(null) inode=14607 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=44 name=(null) inode=14607 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=45 name=(null) inode=14608 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=46 name=(null) inode=14607 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=47 name=(null) inode=14609 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=48 name=(null) inode=14607 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=49 name=(null) inode=14610 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=50 name=(null) inode=14607 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=51 name=(null) inode=14611 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=52 name=(null) inode=14607 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=53 name=(null) inode=14612 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=54 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=55 name=(null) inode=14613 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=56 name=(null) inode=14613 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=57 name=(null) inode=14614 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=58 name=(null) inode=14613 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=59 name=(null) inode=14615 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=60 name=(null) inode=14613 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=61 name=(null) inode=14616 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=62 name=(null) inode=14616 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=63 name=(null) inode=14617 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=64 name=(null) inode=14616 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=65 name=(null) inode=14618 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=66 name=(null) inode=14616 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=67 name=(null) inode=14619 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=68 
name=(null) inode=14616 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=69 name=(null) inode=14620 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=70 name=(null) inode=14616 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=71 name=(null) inode=14621 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=72 name=(null) inode=14613 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=73 name=(null) inode=14622 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=74 name=(null) inode=14622 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=75 name=(null) inode=14623 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=76 name=(null) inode=14622 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=77 name=(null) inode=14624 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=78 name=(null) inode=14622 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=79 name=(null) inode=14625 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=80 name=(null) inode=14622 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=81 name=(null) inode=14626 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=82 name=(null) inode=14622 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=83 name=(null) inode=14627 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=84 name=(null) inode=14613 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=85 name=(null) inode=14628 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=86 name=(null) inode=14628 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=87 name=(null) inode=14629 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=88 name=(null) inode=14628 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=89 name=(null) inode=14630 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=90 name=(null) inode=14628 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=91 name=(null) inode=14631 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=92 name=(null) inode=14628 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=93 name=(null) inode=14632 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=94 name=(null) inode=14628 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=95 name=(null) inode=14633 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=96 name=(null) inode=14613 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=97 name=(null) inode=14634 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=98 name=(null) inode=14634 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=99 name=(null) inode=14635 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=100 name=(null) inode=14634 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=101 name=(null) inode=14636 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=102 name=(null) inode=14634 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=103 name=(null) inode=14637 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=104 name=(null) inode=14634 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=105 name=(null) inode=14638 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=106 name=(null) inode=14634 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=107 name=(null) inode=14639 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PATH item=109 name=(null) inode=14267 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:18:31.710000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 14:18:31.776227 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Dec 13 14:18:31.779306 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 13 14:18:31.779465 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Dec 13 14:18:31.779632 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 13 14:18:31.785307 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Dec 13 14:18:31.792281 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 14:18:31.816935 kernel: kvm: Nested Virtualization enabled Dec 13 14:18:31.817034 kernel: SVM: kvm: Nested Paging enabled Dec 13 14:18:31.818639 kernel: SVM: Virtual VMLOAD VMSAVE supported Dec 13 14:18:31.818685 kernel: SVM: Virtual GIF supported Dec 13 14:18:31.835290 kernel: EDAC MC: Ver: 3.0.0 Dec 13 14:18:31.862705 systemd[1]: Finished systemd-udev-settle.service. Dec 13 14:18:31.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:31.864918 systemd[1]: Starting lvm2-activation-early.service... Dec 13 14:18:31.872079 lvm[1111]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:18:31.899600 systemd[1]: Finished lvm2-activation-early.service. 
Dec 13 14:18:31.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:31.900960 systemd[1]: Reached target cryptsetup.target. Dec 13 14:18:31.903514 systemd[1]: Starting lvm2-activation.service... Dec 13 14:18:31.907631 lvm[1113]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 14:18:31.936688 systemd[1]: Finished lvm2-activation.service. Dec 13 14:18:31.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:31.937884 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:18:31.938763 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 14:18:31.938788 systemd[1]: Reached target local-fs.target. Dec 13 14:18:31.939625 systemd[1]: Reached target machines.target. Dec 13 14:18:31.941675 systemd[1]: Starting ldconfig.service... Dec 13 14:18:31.942784 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:18:31.942848 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:18:31.943885 systemd[1]: Starting systemd-boot-update.service... Dec 13 14:18:31.945805 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 14:18:31.948513 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 14:18:31.951233 systemd[1]: Starting systemd-sysext.service... Dec 13 14:18:31.952718 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1116 (bootctl) Dec 13 14:18:31.954059 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 14:18:31.959338 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 14:18:31.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:31.963070 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 14:18:31.967201 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 14:18:31.967543 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 14:18:32.048285 kernel: loop0: detected capacity change from 0 to 211296 Dec 13 14:18:32.067234 systemd-fsck[1126]: fsck.fat 4.2 (2021-01-31) Dec 13 14:18:32.067234 systemd-fsck[1126]: /dev/vda1: 790 files, 119311/258078 clusters Dec 13 14:18:32.068669 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 14:18:32.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.071570 systemd[1]: Mounting boot.mount... Dec 13 14:18:32.087033 systemd[1]: Mounted boot.mount. Dec 13 14:18:32.473640 systemd[1]: Finished systemd-boot-update.service. 
Dec 13 14:18:32.475293 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 14:18:32.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.491352 kernel: loop1: detected capacity change from 0 to 211296 Dec 13 14:18:32.493000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.491850 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 14:18:32.492582 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 14:18:32.497656 (sd-sysext)[1136]: Using extensions 'kubernetes'. Dec 13 14:18:32.498082 (sd-sysext)[1136]: Merged extensions into '/usr'. Dec 13 14:18:32.548456 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:18:32.549989 systemd[1]: Mounting usr-share-oem.mount... Dec 13 14:18:32.551067 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:18:32.553130 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:18:32.555767 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:18:32.558119 systemd[1]: Starting modprobe@loop.service... Dec 13 14:18:32.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.564000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.559196 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:18:32.559342 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:18:32.559463 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:18:32.562321 systemd[1]: Mounted usr-share-oem.mount. Dec 13 14:18:32.563819 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:18:32.564012 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:18:32.565563 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:18:32.565707 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:18:32.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.567000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.568098 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:18:32.568278 systemd[1]: Finished modprobe@loop.service. 
Dec 13 14:18:32.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.569000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.569818 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:18:32.569923 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:18:32.571557 systemd[1]: Finished systemd-sysext.service. Dec 13 14:18:32.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.574244 systemd[1]: Starting ensure-sysext.service... Dec 13 14:18:32.576441 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 14:18:32.580553 systemd[1]: Reloading. Dec 13 14:18:32.591639 systemd-tmpfiles[1151]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 14:18:32.592644 systemd-tmpfiles[1151]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 14:18:32.594964 systemd-tmpfiles[1151]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 14:18:32.607619 ldconfig[1115]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 14:18:32.694023 /usr/lib/systemd/system-generators/torcx-generator[1201]: time="2024-12-13T14:18:32Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:18:32.694061 /usr/lib/systemd/system-generators/torcx-generator[1201]: time="2024-12-13T14:18:32Z" level=info msg="torcx already run" Dec 13 14:18:32.727504 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:18:32.727523 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:18:32.751841 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:18:32.814955 systemd[1]: Finished ldconfig.service. Dec 13 14:18:32.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.816866 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 14:18:32.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:18:32.819907 systemd[1]: Starting audit-rules.service... Dec 13 14:18:32.822070 systemd[1]: Starting clean-ca-certificates.service... Dec 13 14:18:32.824296 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 14:18:32.826688 systemd[1]: Starting systemd-resolved.service... Dec 13 14:18:32.828764 systemd[1]: Starting systemd-timesyncd.service... Dec 13 14:18:32.830945 systemd[1]: Starting systemd-update-utmp.service... Dec 13 14:18:32.832834 systemd[1]: Finished clean-ca-certificates.service. Dec 13 14:18:32.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.836777 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:18:32.839553 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:18:32.841328 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:18:32.843549 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:18:32.845513 systemd[1]: Starting modprobe@loop.service... Dec 13 14:18:32.846324 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:18:32.846489 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:18:32.846623 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:18:32.847000 audit[1234]: SYSTEM_BOOT pid=1234 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.847802 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:18:32.847965 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:18:32.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.849241 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:18:32.849395 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:18:32.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.850000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.850784 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:18:32.851000 systemd[1]: Finished modprobe@loop.service. 
Dec 13 14:18:32.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.851000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.853748 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:18:32.853898 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:18:32.855684 systemd[1]: Finished systemd-update-utmp.service. Dec 13 14:18:32.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.858009 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:18:32.859522 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:18:32.862917 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:18:32.865229 systemd[1]: Starting modprobe@loop.service... Dec 13 14:18:32.866102 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:18:32.866293 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:18:32.866464 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:18:32.867806 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 14:18:32.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.869396 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:18:32.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.872000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:18:32.873000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:32.876000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 14:18:32.876000 audit[1259]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd4a631390 a2=420 a3=0 items=0 ppid=1222 pid=1259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:18:32.876000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 14:18:32.869577 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:18:32.877189 augenrules[1259]: No rules Dec 13 14:18:32.871016 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:18:32.871206 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:18:32.872686 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:18:32.872896 systemd[1]: Finished modprobe@loop.service. Dec 13 14:18:32.874160 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:18:32.874295 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:18:32.876556 systemd[1]: Starting systemd-update-done.service... Dec 13 14:18:32.878858 systemd[1]: Finished audit-rules.service. Dec 13 14:18:32.883486 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:18:32.885130 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:18:32.888072 systemd[1]: Starting modprobe@drm.service... Dec 13 14:18:32.890156 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:18:32.893061 systemd[1]: Starting modprobe@loop.service... Dec 13 14:18:32.893977 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:18:32.894132 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:18:32.896180 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 14:18:32.897465 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:18:32.899037 systemd[1]: Finished systemd-update-done.service. Dec 13 14:18:32.900474 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:18:32.900656 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:18:32.903713 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 14:18:32.903872 systemd[1]: Finished modprobe@drm.service. Dec 13 14:18:32.905226 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:18:32.905429 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:18:32.906697 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:18:32.906857 systemd[1]: Finished modprobe@loop.service. Dec 13 14:18:32.908189 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Dec 13 14:18:32.908337 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:18:32.909667 systemd[1]: Finished ensure-sysext.service. Dec 13 14:18:32.947567 systemd-resolved[1229]: Positive Trust Anchors: Dec 13 14:18:32.947584 systemd-resolved[1229]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 14:18:32.947616 systemd-resolved[1229]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 14:18:32.947920 systemd[1]: Started systemd-timesyncd.service. Dec 13 14:18:32.949246 systemd[1]: Reached target time-set.target. Dec 13 14:18:33.439262 systemd-timesyncd[1233]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 13 14:18:33.439307 systemd-timesyncd[1233]: Initial clock synchronization to Fri 2024-12-13 14:18:33.439175 UTC. Dec 13 14:18:33.444283 systemd-resolved[1229]: Defaulting to hostname 'linux'. Dec 13 14:18:33.446150 systemd[1]: Started systemd-resolved.service. Dec 13 14:18:33.447238 systemd[1]: Reached target network.target. Dec 13 14:18:33.448132 systemd[1]: Reached target nss-lookup.target. Dec 13 14:18:33.449029 systemd[1]: Reached target sysinit.target. Dec 13 14:18:33.449955 systemd[1]: Started motdgen.path. Dec 13 14:18:33.450798 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 14:18:33.452131 systemd[1]: Started logrotate.timer. Dec 13 14:18:33.452997 systemd[1]: Started mdadm.timer. Dec 13 14:18:33.453724 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 14:18:33.454640 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 14:18:33.454673 systemd[1]: Reached target paths.target. Dec 13 14:18:33.455457 systemd[1]: Reached target timers.target. Dec 13 14:18:33.456752 systemd[1]: Listening on dbus.socket. Dec 13 14:18:33.459364 systemd[1]: Starting docker.socket... Dec 13 14:18:33.461175 systemd[1]: Listening on sshd.socket. Dec 13 14:18:33.462054 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:18:33.463756 systemd[1]: Listening on docker.socket. Dec 13 14:18:33.464548 systemd[1]: Reached target sockets.target. Dec 13 14:18:33.465324 systemd[1]: Reached target basic.target. Dec 13 14:18:33.466193 systemd[1]: System is tainted: cgroupsv1 Dec 13 14:18:33.466223 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:18:33.466253 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:18:33.466270 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:18:33.467225 systemd[1]: Starting containerd.service... Dec 13 14:18:33.469036 systemd[1]: Starting dbus.service... Dec 13 14:18:33.470767 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 14:18:33.472982 systemd[1]: Starting extend-filesystems.service... 
Dec 13 14:18:33.474082 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 14:18:33.475528 systemd[1]: Starting motdgen.service... Dec 13 14:18:33.477585 systemd[1]: Starting prepare-helm.service... Dec 13 14:18:33.479778 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 14:18:33.481937 jq[1284]: false Dec 13 14:18:33.483342 systemd[1]: Starting sshd-keygen.service... Dec 13 14:18:33.485977 systemd[1]: Starting systemd-logind.service... Dec 13 14:18:33.486810 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:18:33.486881 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 14:18:33.487923 systemd[1]: Starting update-engine.service... Dec 13 14:18:33.489830 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 14:18:33.490869 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:18:33.492498 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 14:18:33.499105 jq[1299]: true Dec 13 14:18:33.494402 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 14:18:33.495189 systemd-networkd[1087]: eth0: Gained IPv6LL Dec 13 14:18:33.495735 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 14:18:33.495958 systemd[1]: Finished ssh-key-proc-cmdline.service. Dec 13 14:18:33.505159 tar[1307]: linux-amd64/helm Dec 13 14:18:33.505608 extend-filesystems[1285]: Found loop1 Dec 13 14:18:33.506882 extend-filesystems[1285]: Found sr0 Dec 13 14:18:33.506882 extend-filesystems[1285]: Found vda Dec 13 14:18:33.506882 extend-filesystems[1285]: Found vda1 Dec 13 14:18:33.506882 extend-filesystems[1285]: Found vda2 Dec 13 14:18:33.506882 extend-filesystems[1285]: Found vda3 Dec 13 14:18:33.506882 extend-filesystems[1285]: Found usr Dec 13 14:18:33.506882 extend-filesystems[1285]: Found vda4 Dec 13 14:18:33.506882 extend-filesystems[1285]: Found vda6 Dec 13 14:18:33.506882 extend-filesystems[1285]: Found vda7 Dec 13 14:18:33.506882 extend-filesystems[1285]: Found vda9 Dec 13 14:18:33.506882 extend-filesystems[1285]: Checking size of /dev/vda9 Dec 13 14:18:33.567963 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 13 14:18:33.567992 jq[1310]: true Dec 13 14:18:33.568113 extend-filesystems[1285]: Resized partition /dev/vda9 Dec 13 14:18:33.542882 dbus-daemon[1282]: [system] SELinux support is enabled Dec 13 14:18:33.539656 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 14:18:33.570752 extend-filesystems[1320]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 14:18:33.541323 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 14:18:33.541569 systemd[1]: Finished motdgen.service. Dec 13 14:18:33.543911 systemd[1]: Started dbus.service. Dec 13 14:18:33.551287 systemd[1]: Reached target network-online.target. Dec 13 14:18:33.558959 systemd[1]: Starting kubelet.service... Dec 13 14:18:33.559902 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 14:18:33.559939 systemd[1]: Reached target system-config.target. 
Dec 13 14:18:33.560987 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 14:18:33.561001 systemd[1]: Reached target user-config.target. Dec 13 14:18:33.610535 update_engine[1298]: I1213 14:18:33.609905 1298 main.cc:92] Flatcar Update Engine starting Dec 13 14:18:33.613403 update_engine[1298]: I1213 14:18:33.613301 1298 update_check_scheduler.cc:74] Next update check in 10m14s Dec 13 14:18:33.623221 systemd[1]: Started update-engine.service. Dec 13 14:18:33.626771 systemd[1]: Started locksmithd.service. Dec 13 14:18:33.640875 env[1311]: time="2024-12-13T14:18:33.640791643Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 14:18:33.697063 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 13 14:18:33.701501 env[1311]: time="2024-12-13T14:18:33.701425944Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 14:18:33.727412 env[1311]: time="2024-12-13T14:18:33.726337753Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:18:33.726673 systemd-logind[1295]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 14:18:33.726691 systemd-logind[1295]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 14:18:33.729227 env[1311]: time="2024-12-13T14:18:33.729131232Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:18:33.729294 env[1311]: time="2024-12-13T14:18:33.729220078Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:18:33.729955 env[1311]: time="2024-12-13T14:18:33.729897479Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:18:33.730323 env[1311]: time="2024-12-13T14:18:33.730288271Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 14:18:33.730497 env[1311]: time="2024-12-13T14:18:33.730438533Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 14:18:33.730646 env[1311]: time="2024-12-13T14:18:33.730604675Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 14:18:33.730903 env[1311]: time="2024-12-13T14:18:33.730869712Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:18:33.730986 extend-filesystems[1320]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 14:18:33.730986 extend-filesystems[1320]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 14:18:33.730986 extend-filesystems[1320]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
Dec 13 14:18:33.759831 extend-filesystems[1285]: Resized filesystem in /dev/vda9 Dec 13 14:18:33.760989 bash[1341]: Updated "/home/core/.ssh/authorized_keys" Dec 13 14:18:33.761117 env[1311]: time="2024-12-13T14:18:33.731488763Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:18:33.761117 env[1311]: time="2024-12-13T14:18:33.731761785Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:18:33.761117 env[1311]: time="2024-12-13T14:18:33.731778566Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 14:18:33.761117 env[1311]: time="2024-12-13T14:18:33.731832207Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 14:18:33.761117 env[1311]: time="2024-12-13T14:18:33.731847596Z" level=info msg="metadata content store policy set" policy=shared Dec 13 14:18:33.761117 env[1311]: time="2024-12-13T14:18:33.754982562Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 14:18:33.761117 env[1311]: time="2024-12-13T14:18:33.755068333Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 14:18:33.761117 env[1311]: time="2024-12-13T14:18:33.755085786Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 14:18:33.761117 env[1311]: time="2024-12-13T14:18:33.755203737Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 14:18:33.761117 env[1311]: time="2024-12-13T14:18:33.755226730Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 14:18:33.761117 env[1311]: time="2024-12-13T14:18:33.755240987Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 14:18:33.761117 env[1311]: time="2024-12-13T14:18:33.755255714Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 14:18:33.761117 env[1311]: time="2024-12-13T14:18:33.755304325Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 14:18:33.731795 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 14:18:33.761804 env[1311]: time="2024-12-13T14:18:33.755321287Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Dec 13 14:18:33.761804 env[1311]: time="2024-12-13T14:18:33.755335053Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 14:18:33.761804 env[1311]: time="2024-12-13T14:18:33.755380098Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 14:18:33.761804 env[1311]: time="2024-12-13T14:18:33.755402970Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 14:18:33.761804 env[1311]: time="2024-12-13T14:18:33.755617062Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Dec 13 14:18:33.761804 env[1311]: time="2024-12-13T14:18:33.755840792Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 14:18:33.761804 env[1311]: time="2024-12-13T14:18:33.756770946Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 14:18:33.761804 env[1311]: time="2024-12-13T14:18:33.756882535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 14:18:33.761804 env[1311]: time="2024-12-13T14:18:33.756907362Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 14:18:33.761804 env[1311]: time="2024-12-13T14:18:33.757607174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 14:18:33.761804 env[1311]: time="2024-12-13T14:18:33.757629155Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 14:18:33.761804 env[1311]: time="2024-12-13T14:18:33.757646057Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 14:18:33.761804 env[1311]: time="2024-12-13T14:18:33.757673418Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 14:18:33.761804 env[1311]: time="2024-12-13T14:18:33.757684419Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 14:18:33.732051 systemd[1]: Finished extend-filesystems.service. Dec 13 14:18:33.762272 env[1311]: time="2024-12-13T14:18:33.757700730Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 14:18:33.762272 env[1311]: time="2024-12-13T14:18:33.757711730Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 14:18:33.762272 env[1311]: time="2024-12-13T14:18:33.757721569Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 14:18:33.762272 env[1311]: time="2024-12-13T14:18:33.757768677Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 14:18:33.762272 env[1311]: time="2024-12-13T14:18:33.758543560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 14:18:33.762272 env[1311]: time="2024-12-13T14:18:33.758574719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 14:18:33.762272 env[1311]: time="2024-12-13T14:18:33.758606849Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 14:18:33.762272 env[1311]: time="2024-12-13T14:18:33.758625424Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 14:18:33.762272 env[1311]: time="2024-12-13T14:18:33.758661351Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 14:18:33.762272 env[1311]: time="2024-12-13T14:18:33.758709792Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Dec 13 14:18:33.762272 env[1311]: time="2024-12-13T14:18:33.758748214Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 14:18:33.762272 env[1311]: time="2024-12-13T14:18:33.758885271Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 14:18:33.748971 systemd-logind[1295]: New seat seat0. Dec 13 14:18:33.761893 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 14:18:33.762833 env[1311]: time="2024-12-13T14:18:33.759686634Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 14:18:33.762833 env[1311]: time="2024-12-13T14:18:33.759756976Z" level=info msg="Connect containerd service" Dec 13 14:18:33.762833 env[1311]: time="2024-12-13T14:18:33.759821206Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 14:18:33.764218 systemd[1]: Started systemd-logind.service. 
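The "Start cri plugin with config {...}" entry above dumps containerd's effective CRI configuration as one flattened line (snapshotter overlayfs, runc with SystemdCgroup:false, CNI config under /etc/cni/net.d, sandbox image registry.k8s.io/pause:3.6, and so on). One illustrative way to pull individual settings back out of that dump; the cri_dump string below is an abridged excerpt of the logged line, not a full copy:

    import re

    # Abridged excerpt of the flattened CRI config dump logged above.
    cri_dump = (
        "Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs "
        "DefaultRuntimeName:runc ... Options:map[SystemdCgroup:false] ... "
        "CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d ... "
        "SandboxImage:registry.k8s.io/pause:3.6 ...}"
    )

    def cri_setting(key: str, text: str = cri_dump) -> str | None:
        # A value runs until whitespace or a closing bracket/brace.
        m = re.search(rf"{key}:([^\s\]}}]+)", text)
        return m.group(1) if m else None

    for key in ("Snapshotter", "SystemdCgroup", "SandboxImage", "NetworkPluginConfDir"):
        print(key, "=", cri_setting(key))
    # Snapshotter = overlayfs, SystemdCgroup = false,
    # SandboxImage = registry.k8s.io/pause:3.6, NetworkPluginConfDir = /etc/cni/net.d

These same settings would normally come from containerd's config.toml; SystemdCgroup is false here, which is worth noting if the kubelet is later configured for the systemd cgroup driver.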
Dec 13 14:18:33.773859 env[1311]: time="2024-12-13T14:18:33.773799801Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:18:33.774507 env[1311]: time="2024-12-13T14:18:33.774306401Z" level=info msg="Start subscribing containerd event" Dec 13 14:18:33.774630 env[1311]: time="2024-12-13T14:18:33.774609720Z" level=info msg="Start recovering state" Dec 13 14:18:33.774815 env[1311]: time="2024-12-13T14:18:33.774799746Z" level=info msg="Start event monitor" Dec 13 14:18:33.775348 env[1311]: time="2024-12-13T14:18:33.775155353Z" level=info msg="Start snapshots syncer" Dec 13 14:18:33.775497 env[1311]: time="2024-12-13T14:18:33.775443494Z" level=info msg="Start cni network conf syncer for default" Dec 13 14:18:33.775599 env[1311]: time="2024-12-13T14:18:33.775573107Z" level=info msg="Start streaming server" Dec 13 14:18:33.778849 env[1311]: time="2024-12-13T14:18:33.778829985Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 14:18:33.779051 env[1311]: time="2024-12-13T14:18:33.779032815Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 14:18:33.779428 env[1311]: time="2024-12-13T14:18:33.779399322Z" level=info msg="containerd successfully booted in 0.139626s" Dec 13 14:18:33.779613 systemd[1]: Started containerd.service. Dec 13 14:18:33.808766 locksmithd[1347]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 14:18:34.407378 tar[1307]: linux-amd64/LICENSE Dec 13 14:18:34.407598 tar[1307]: linux-amd64/README.md Dec 13 14:18:34.413254 systemd[1]: Finished prepare-helm.service. Dec 13 14:18:34.498105 sshd_keygen[1315]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 14:18:34.519911 systemd[1]: Finished sshd-keygen.service. Dec 13 14:18:34.522531 systemd[1]: Starting issuegen.service... Dec 13 14:18:34.530205 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 14:18:34.530426 systemd[1]: Finished issuegen.service. Dec 13 14:18:34.532869 systemd[1]: Starting systemd-user-sessions.service... Dec 13 14:18:34.540074 systemd[1]: Finished systemd-user-sessions.service. Dec 13 14:18:34.543178 systemd[1]: Started getty@tty1.service. Dec 13 14:18:34.546242 systemd[1]: Started serial-getty@ttyS0.service. Dec 13 14:18:34.547754 systemd[1]: Reached target getty.target. Dec 13 14:18:34.963096 systemd[1]: Started kubelet.service. Dec 13 14:18:34.964599 systemd[1]: Reached target multi-user.target. Dec 13 14:18:34.967319 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 14:18:34.973513 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 14:18:34.973708 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 14:18:34.976147 systemd[1]: Startup finished in 5.862s (kernel) + 7.598s (userspace) = 13.461s. 
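The "Startup finished" line above reports 5.862s (kernel) + 7.598s (userspace) = 13.461s, which does not add up exactly; presumably each term is rounded separately for display while the total is taken from the unrounded timestamps. A trivial check, with the figures parsed straight from that line:

    import re

    line = "Startup finished in 5.862s (kernel) + 7.598s (userspace) = 13.461s."
    kernel, userspace, total = (float(x) for x in re.findall(r"([\d.]+)s", line))
    print(round(kernel + userspace, 3), "vs reported", total)
    # 13.46 vs reported 13.461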
Dec 13 14:18:35.849412 kubelet[1386]: E1213 14:18:35.849309 1386 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:18:35.851144 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:18:35.851282 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:18:35.893029 systemd[1]: Created slice system-sshd.slice. Dec 13 14:18:35.894868 systemd[1]: Started sshd@0-10.0.0.24:22-10.0.0.1:49820.service. Dec 13 14:18:35.939589 sshd[1398]: Accepted publickey for core from 10.0.0.1 port 49820 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg Dec 13 14:18:35.942628 sshd[1398]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:18:35.952002 systemd[1]: Created slice user-500.slice. Dec 13 14:18:35.953199 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 14:18:35.955075 systemd-logind[1295]: New session 1 of user core. Dec 13 14:18:35.963583 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 14:18:35.965837 systemd[1]: Starting user@500.service... Dec 13 14:18:35.969244 (systemd)[1403]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:18:36.040509 systemd[1403]: Queued start job for default target default.target. Dec 13 14:18:36.040786 systemd[1403]: Reached target paths.target. Dec 13 14:18:36.040807 systemd[1403]: Reached target sockets.target. Dec 13 14:18:36.040824 systemd[1403]: Reached target timers.target. Dec 13 14:18:36.040840 systemd[1403]: Reached target basic.target. Dec 13 14:18:36.040893 systemd[1403]: Reached target default.target. Dec 13 14:18:36.040921 systemd[1403]: Startup finished in 65ms. Dec 13 14:18:36.041155 systemd[1]: Started user@500.service. Dec 13 14:18:36.042496 systemd[1]: Started session-1.scope. Dec 13 14:18:36.094582 systemd[1]: Started sshd@1-10.0.0.24:22-10.0.0.1:49840.service. Dec 13 14:18:36.127848 sshd[1412]: Accepted publickey for core from 10.0.0.1 port 49840 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg Dec 13 14:18:36.129369 sshd[1412]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:18:36.134074 systemd-logind[1295]: New session 2 of user core. Dec 13 14:18:36.135145 systemd[1]: Started session-2.scope. Dec 13 14:18:36.192593 sshd[1412]: pam_unix(sshd:session): session closed for user core Dec 13 14:18:36.194817 systemd[1]: Started sshd@2-10.0.0.24:22-10.0.0.1:49866.service. Dec 13 14:18:36.195746 systemd[1]: sshd@1-10.0.0.24:22-10.0.0.1:49840.service: Deactivated successfully. Dec 13 14:18:36.196663 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 14:18:36.196707 systemd-logind[1295]: Session 2 logged out. Waiting for processes to exit. Dec 13 14:18:36.197867 systemd-logind[1295]: Removed session 2. Dec 13 14:18:36.228261 sshd[1417]: Accepted publickey for core from 10.0.0.1 port 49866 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg Dec 13 14:18:36.229453 sshd[1417]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:18:36.233257 systemd-logind[1295]: New session 3 of user core. Dec 13 14:18:36.233962 systemd[1]: Started session-3.scope. 
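The kubelet exits above because /var/lib/kubelet/config.yaml does not exist yet; systemd keeps rescheduling it (the "Scheduled restart job, restart counter is at 1/2" entries later in the log) until that file appears, which is typically written by kubeadm during cluster bootstrap. A minimal sketch of the precondition being complained about; the check below only mirrors the error message, it is not kubelet code:

    from pathlib import Path

    # Path taken from the kubelet error above; kubeadm init/join normally creates it.
    cfg = Path("/var/lib/kubelet/config.yaml")
    if not cfg.exists():
        print(f"kubelet would fail: open {cfg}: no such file or directory")
    else:
        print(f"{cfg} present ({cfg.stat().st_size} bytes)")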
Dec 13 14:18:36.283507 sshd[1417]: pam_unix(sshd:session): session closed for user core Dec 13 14:18:36.286097 systemd[1]: Started sshd@3-10.0.0.24:22-10.0.0.1:49884.service. Dec 13 14:18:36.286748 systemd[1]: sshd@2-10.0.0.24:22-10.0.0.1:49866.service: Deactivated successfully. Dec 13 14:18:36.287931 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 14:18:36.287936 systemd-logind[1295]: Session 3 logged out. Waiting for processes to exit. Dec 13 14:18:36.289290 systemd-logind[1295]: Removed session 3. Dec 13 14:18:36.319492 sshd[1424]: Accepted publickey for core from 10.0.0.1 port 49884 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg Dec 13 14:18:36.320698 sshd[1424]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:18:36.324204 systemd-logind[1295]: New session 4 of user core. Dec 13 14:18:36.324998 systemd[1]: Started session-4.scope. Dec 13 14:18:36.379609 sshd[1424]: pam_unix(sshd:session): session closed for user core Dec 13 14:18:36.381968 systemd[1]: Started sshd@4-10.0.0.24:22-10.0.0.1:49892.service. Dec 13 14:18:36.383004 systemd[1]: sshd@3-10.0.0.24:22-10.0.0.1:49884.service: Deactivated successfully. Dec 13 14:18:36.383676 systemd-logind[1295]: Session 4 logged out. Waiting for processes to exit. Dec 13 14:18:36.383725 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 14:18:36.384569 systemd-logind[1295]: Removed session 4. Dec 13 14:18:36.414895 sshd[1431]: Accepted publickey for core from 10.0.0.1 port 49892 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg Dec 13 14:18:36.416191 sshd[1431]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:18:36.419728 systemd-logind[1295]: New session 5 of user core. Dec 13 14:18:36.420669 systemd[1]: Started session-5.scope. Dec 13 14:18:36.479178 sudo[1437]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 14:18:36.479436 sudo[1437]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 14:18:36.489285 dbus-daemon[1282]: \xd0\xfdy\u0005\u001cV: received setenforce notice (enforcing=1247978176) Dec 13 14:18:36.491247 sudo[1437]: pam_unix(sudo:session): session closed for user root Dec 13 14:18:36.492979 sshd[1431]: pam_unix(sshd:session): session closed for user core Dec 13 14:18:36.496704 systemd[1]: Started sshd@5-10.0.0.24:22-10.0.0.1:49914.service. Dec 13 14:18:36.497368 systemd[1]: sshd@4-10.0.0.24:22-10.0.0.1:49892.service: Deactivated successfully. Dec 13 14:18:36.498622 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 14:18:36.498658 systemd-logind[1295]: Session 5 logged out. Waiting for processes to exit. Dec 13 14:18:36.500046 systemd-logind[1295]: Removed session 5. Dec 13 14:18:36.527378 sshd[1440]: Accepted publickey for core from 10.0.0.1 port 49914 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg Dec 13 14:18:36.528557 sshd[1440]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:18:36.532787 systemd-logind[1295]: New session 6 of user core. Dec 13 14:18:36.533858 systemd[1]: Started session-6.scope. 
Dec 13 14:18:36.587973 sudo[1446]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 14:18:36.588273 sudo[1446]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 14:18:36.591867 sudo[1446]: pam_unix(sudo:session): session closed for user root Dec 13 14:18:36.597038 sudo[1445]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 14:18:36.597225 sudo[1445]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 14:18:36.606648 systemd[1]: Stopping audit-rules.service... Dec 13 14:18:36.607000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 13 14:18:36.607000 audit[1449]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffca1ff4810 a2=420 a3=0 items=0 ppid=1 pid=1449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:18:36.607000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Dec 13 14:18:36.608540 auditctl[1449]: No rules Dec 13 14:18:36.608767 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 14:18:36.609074 systemd[1]: Stopped audit-rules.service. Dec 13 14:18:36.608000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:36.611212 systemd[1]: Starting audit-rules.service... Dec 13 14:18:36.631548 augenrules[1467]: No rules Dec 13 14:18:36.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:36.632000 audit[1445]: USER_END pid=1445 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 14:18:36.632000 audit[1445]: CRED_DISP pid=1445 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 14:18:36.632291 systemd[1]: Finished audit-rules.service. Dec 13 14:18:36.633228 sudo[1445]: pam_unix(sudo:session): session closed for user root Dec 13 14:18:36.635000 audit[1440]: USER_END pid=1440 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:18:36.635000 audit[1440]: CRED_DISP pid=1440 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:18:36.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.24:22-10.0.0.1:49930 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:18:36.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.24:22-10.0.0.1:49914 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:36.634718 sshd[1440]: pam_unix(sshd:session): session closed for user core Dec 13 14:18:36.637882 systemd[1]: Started sshd@6-10.0.0.24:22-10.0.0.1:49930.service. Dec 13 14:18:36.638457 systemd[1]: sshd@5-10.0.0.24:22-10.0.0.1:49914.service: Deactivated successfully. Dec 13 14:18:36.639304 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 14:18:36.639839 systemd-logind[1295]: Session 6 logged out. Waiting for processes to exit. Dec 13 14:18:36.640603 systemd-logind[1295]: Removed session 6. Dec 13 14:18:36.669000 audit[1473]: USER_ACCT pid=1473 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:18:36.670950 sshd[1473]: Accepted publickey for core from 10.0.0.1 port 49930 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg Dec 13 14:18:36.671000 audit[1473]: CRED_ACQ pid=1473 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:18:36.671000 audit[1473]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd0919cd80 a2=3 a3=0 items=0 ppid=1 pid=1473 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:18:36.671000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:18:36.672434 sshd[1473]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:18:36.676720 systemd-logind[1295]: New session 7 of user core. Dec 13 14:18:36.677594 systemd[1]: Started session-7.scope. Dec 13 14:18:36.681000 audit[1473]: USER_START pid=1473 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:18:36.682000 audit[1477]: CRED_ACQ pid=1477 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:18:36.730000 audit[1478]: USER_ACCT pid=1478 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 14:18:36.730000 audit[1478]: CRED_REFR pid=1478 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 13 14:18:36.731908 sudo[1478]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 14:18:36.732125 sudo[1478]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 14:18:36.732000 audit[1478]: USER_START pid=1478 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 14:18:36.818616 systemd[1]: Starting docker.service... Dec 13 14:18:36.946418 env[1489]: time="2024-12-13T14:18:36.946243205Z" level=info msg="Starting up" Dec 13 14:18:36.948156 env[1489]: time="2024-12-13T14:18:36.948123130Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 14:18:36.948156 env[1489]: time="2024-12-13T14:18:36.948145653Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 14:18:36.948253 env[1489]: time="2024-12-13T14:18:36.948177202Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 14:18:36.948253 env[1489]: time="2024-12-13T14:18:36.948194745Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 14:18:36.953492 env[1489]: time="2024-12-13T14:18:36.953437026Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 14:18:36.953492 env[1489]: time="2024-12-13T14:18:36.953457875Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 14:18:36.953492 env[1489]: time="2024-12-13T14:18:36.953474025Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 14:18:36.953492 env[1489]: time="2024-12-13T14:18:36.953483573Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 14:18:37.305916 env[1489]: time="2024-12-13T14:18:37.305781968Z" level=warning msg="Your kernel does not support cgroup blkio weight" Dec 13 14:18:37.305916 env[1489]: time="2024-12-13T14:18:37.305822705Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Dec 13 14:18:37.306253 env[1489]: time="2024-12-13T14:18:37.306220862Z" level=info msg="Loading containers: start." 
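The audit records in this log (the auditctl record above, and the run of iptables records that follows as Docker programs its NAT and FORWARD chains) carry the full command line in the PROCTITLE field as hex-encoded bytes with NUL separators between arguments. A short decoder, using two proctitle values copied verbatim from this log:

    def decode_proctitle(hex_title: str) -> str:
        """Audit PROCTITLE fields are hex bytes with NUL-separated argv entries."""
        return bytes.fromhex(hex_title).replace(b"\x00", b" ").decode()

    print(decode_proctitle("2F7362696E2F617564697463746C002D44"))
    # /sbin/auditctl -D
    print(decode_proctitle(
        "2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552"))
    # /usr/sbin/iptables --wait -t nat -N DOCKER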
Dec 13 14:18:37.365000 audit[1523]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1523 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:18:37.367041 kernel: kauditd_printk_skb: 164 callbacks suppressed Dec 13 14:18:37.367103 kernel: audit: type=1325 audit(1734099517.365:168): table=nat:2 family=2 entries=2 op=nft_register_chain pid=1523 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:18:37.365000 audit[1523]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffe7ee9fec0 a2=0 a3=7ffe7ee9feac items=0 ppid=1489 pid=1523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:18:37.373807 kernel: audit: type=1300 audit(1734099517.365:168): arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffe7ee9fec0 a2=0 a3=7ffe7ee9feac items=0 ppid=1489 pid=1523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:18:37.373869 kernel: audit: type=1327 audit(1734099517.365:168): proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 13 14:18:37.365000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 13 14:18:37.375707 kernel: audit: type=1325 audit(1734099517.366:169): table=filter:3 family=2 entries=2 op=nft_register_chain pid=1525 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:18:37.366000 audit[1525]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1525 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:18:37.377944 kernel: audit: type=1300 audit(1734099517.366:169): arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffeca6399d0 a2=0 a3=7ffeca6399bc items=0 ppid=1489 pid=1525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:18:37.366000 audit[1525]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffeca6399d0 a2=0 a3=7ffeca6399bc items=0 ppid=1489 pid=1525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:18:37.382540 kernel: audit: type=1327 audit(1734099517.366:169): proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 13 14:18:37.366000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 13 14:18:37.384613 kernel: audit: type=1325 audit(1734099517.369:170): table=filter:4 family=2 entries=1 op=nft_register_chain pid=1527 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:18:37.369000 audit[1527]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1527 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:18:37.369000 audit[1527]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe18b7e4a0 a2=0 a3=7ffe18b7e48c items=0 ppid=1489 pid=1527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:18:37.392062 kernel: audit: type=1300 audit(1734099517.369:170): arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe18b7e4a0 a2=0 a3=7ffe18b7e48c items=0 ppid=1489 pid=1527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:18:37.392158 kernel: audit: type=1327 audit(1734099517.369:170): proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 13 14:18:37.369000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 13 14:18:37.394931 kernel: audit: type=1325 audit(1734099517.369:171): table=filter:5 family=2 entries=1 op=nft_register_chain pid=1529 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:18:37.369000 audit[1529]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1529 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:18:37.369000 audit[1529]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffeee481750 a2=0 a3=7ffeee48173c items=0 ppid=1489 pid=1529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:18:37.369000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 13 14:18:37.369000 audit[1531]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1531 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:18:37.369000 audit[1531]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffcee952040 a2=0 a3=7ffcee95202c items=0 ppid=1489 pid=1531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:18:37.369000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Dec 13 14:18:37.416000 audit[1536]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1536 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:18:37.416000 audit[1536]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fffc00dabd0 a2=0 a3=7fffc00dabbc items=0 ppid=1489 pid=1536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:18:37.416000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Dec 13 14:18:37.428000 audit[1538]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1538 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:18:37.428000 audit[1538]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc0c8d8750 a2=0 a3=7ffc0c8d873c items=0 ppid=1489 pid=1538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:18:37.428000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 13 14:18:37.431000 audit[1540]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1540 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:18:37.431000 audit[1540]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffcdb246cd0 a2=0 a3=7ffcdb246cbc items=0 ppid=1489 pid=1540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:18:37.431000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 13 14:18:37.432000 audit[1542]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1542 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:18:37.432000 audit[1542]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffd5241fde0 a2=0 a3=7ffd5241fdcc items=0 ppid=1489 pid=1542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:18:37.432000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 13 14:18:37.443000 audit[1546]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1546 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:18:37.443000 audit[1546]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffe5abd2070 a2=0 a3=7ffe5abd205c items=0 ppid=1489 pid=1546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:18:37.443000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Dec 13 14:18:37.449000 audit[1547]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1547 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:18:37.449000 audit[1547]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffc90cb9e80 a2=0 a3=7ffc90cb9e6c items=0 ppid=1489 pid=1547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:18:37.449000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 13 14:18:37.459035 kernel: Initializing XFRM netlink socket Dec 13 14:18:37.488328 env[1489]: time="2024-12-13T14:18:37.488265290Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Dec 13 14:18:37.503000 audit[1555]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1555 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:18:37.503000 audit[1555]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffe823d4cd0 a2=0 a3=7ffe823d4cbc items=0 ppid=1489 pid=1555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:18:37.503000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 13 14:18:37.513000 audit[1558]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1558 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:18:37.513000 audit[1558]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffe0d5c3830 a2=0 a3=7ffe0d5c381c items=0 ppid=1489 pid=1558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:18:37.513000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 13 14:18:37.517000 audit[1561]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1561 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:18:37.517000 audit[1561]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffe58187240 a2=0 a3=7ffe5818722c items=0 ppid=1489 pid=1561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:18:37.517000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Dec 13 14:18:37.519000 audit[1563]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1563 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:18:37.519000 audit[1563]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffe7b992250 a2=0 a3=7ffe7b99223c items=0 ppid=1489 pid=1563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:18:37.519000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Dec 13 14:18:37.521000 audit[1565]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1565 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:18:37.521000 audit[1565]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffc2e964f70 a2=0 a3=7ffc2e964f5c items=0 ppid=1489 pid=1565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:18:37.521000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 13 14:18:37.523000 audit[1567]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1567 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:18:37.523000 audit[1567]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffd70941340 a2=0 a3=7ffd7094132c items=0 ppid=1489 pid=1567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:18:37.523000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 13 14:18:37.525000 audit[1569]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1569 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:18:37.525000 audit[1569]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffce2593db0 a2=0 a3=7ffce2593d9c items=0 ppid=1489 pid=1569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:18:37.525000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Dec 13 14:18:37.532000 audit[1572]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1572 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:18:37.532000 audit[1572]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7fff707ce5d0 a2=0 a3=7fff707ce5bc items=0 ppid=1489 pid=1572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:18:37.532000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 13 14:18:37.534000 audit[1574]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1574 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:18:37.534000 audit[1574]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffd61bc3550 a2=0 a3=7ffd61bc353c items=0 ppid=1489 pid=1574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:18:37.534000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 13 14:18:37.535000 audit[1576]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1576 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:18:37.535000 audit[1576]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffec870d6d0 a2=0 a3=7ffec870d6bc items=0 ppid=1489 pid=1576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:18:37.535000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 13 14:18:37.537000 audit[1578]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1578 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:18:37.537000 audit[1578]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc99cf1680 a2=0 a3=7ffc99cf166c items=0 ppid=1489 pid=1578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:18:37.537000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 13 14:18:37.539067 systemd-networkd[1087]: docker0: Link UP Dec 13 14:18:37.548000 audit[1582]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1582 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:18:37.548000 audit[1582]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd472ba880 a2=0 a3=7ffd472ba86c items=0 ppid=1489 pid=1582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:18:37.548000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Dec 13 14:18:37.554000 audit[1583]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1583 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:18:37.554000 audit[1583]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd9904ad90 a2=0 a3=7ffd9904ad7c items=0 ppid=1489 pid=1583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:18:37.554000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 13 14:18:37.556265 env[1489]: time="2024-12-13T14:18:37.556214880Z" level=info msg="Loading containers: done." Dec 13 14:18:37.652805 env[1489]: time="2024-12-13T14:18:37.652739858Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 14:18:37.653001 env[1489]: time="2024-12-13T14:18:37.652956274Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Dec 13 14:18:37.653105 env[1489]: time="2024-12-13T14:18:37.653083052Z" level=info msg="Daemon has completed initialization" Dec 13 14:18:37.675779 systemd[1]: Started docker.service. Dec 13 14:18:37.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:18:37.680091 env[1489]: time="2024-12-13T14:18:37.680027813Z" level=info msg="API listen on /run/docker.sock" Dec 13 14:18:38.808484 env[1311]: time="2024-12-13T14:18:38.808425541Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 14:18:39.588709 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2231741908.mount: Deactivated successfully. Dec 13 14:18:43.217046 env[1311]: time="2024-12-13T14:18:43.216948145Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:43.219115 env[1311]: time="2024-12-13T14:18:43.219048664Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:43.221257 env[1311]: time="2024-12-13T14:18:43.221209056Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:43.223001 env[1311]: time="2024-12-13T14:18:43.222941805Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:43.223710 env[1311]: time="2024-12-13T14:18:43.223640806Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Dec 13 14:18:43.236482 env[1311]: time="2024-12-13T14:18:43.236440460Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 14:18:46.102225 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 14:18:46.109884 kernel: kauditd_printk_skb: 63 callbacks suppressed Dec 13 14:18:46.109925 kernel: audit: type=1130 audit(1734099526.101:193): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:46.109951 kernel: audit: type=1131 audit(1734099526.101:194): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:46.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:46.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:46.102419 systemd[1]: Stopped kubelet.service. Dec 13 14:18:46.104105 systemd[1]: Starting kubelet.service... Dec 13 14:18:46.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:18:46.235148 kernel: audit: type=1130 audit(1734099526.230:195): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:46.231117 systemd[1]: Started kubelet.service. Dec 13 14:18:46.380230 kubelet[1640]: E1213 14:18:46.379800 1640 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:18:46.383437 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:18:46.383580 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:18:46.382000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 14:18:46.388051 kernel: audit: type=1131 audit(1734099526.382:196): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 14:18:47.328509 env[1311]: time="2024-12-13T14:18:47.328442765Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:47.331460 env[1311]: time="2024-12-13T14:18:47.331265919Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:47.335265 env[1311]: time="2024-12-13T14:18:47.335217741Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:47.338301 env[1311]: time="2024-12-13T14:18:47.338268622Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:47.339066 env[1311]: time="2024-12-13T14:18:47.338992820Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\"" Dec 13 14:18:47.351450 env[1311]: time="2024-12-13T14:18:47.351397213Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 14:18:50.493305 env[1311]: time="2024-12-13T14:18:50.493206617Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:50.496980 env[1311]: time="2024-12-13T14:18:50.496894553Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:50.499685 env[1311]: time="2024-12-13T14:18:50.499620866Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:50.501797 env[1311]: time="2024-12-13T14:18:50.501742585Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:50.502512 env[1311]: time="2024-12-13T14:18:50.502471312Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Dec 13 14:18:50.517422 env[1311]: time="2024-12-13T14:18:50.517362368Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 14:18:51.939672 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4092683801.mount: Deactivated successfully. Dec 13 14:18:52.767355 env[1311]: time="2024-12-13T14:18:52.767277712Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:52.769052 env[1311]: time="2024-12-13T14:18:52.769027323Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:52.770648 env[1311]: time="2024-12-13T14:18:52.770617255Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:52.773295 env[1311]: time="2024-12-13T14:18:52.773228822Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:52.774043 env[1311]: time="2024-12-13T14:18:52.773976094Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Dec 13 14:18:52.799410 env[1311]: time="2024-12-13T14:18:52.799363204Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 14:18:54.009850 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2709433792.mount: Deactivated successfully. 
Dec 13 14:18:55.688441 env[1311]: time="2024-12-13T14:18:55.688335187Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:55.725389 env[1311]: time="2024-12-13T14:18:55.725344393Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:55.781785 env[1311]: time="2024-12-13T14:18:55.781735757Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:55.797498 env[1311]: time="2024-12-13T14:18:55.797464263Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:55.798342 env[1311]: time="2024-12-13T14:18:55.798308026Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 14:18:55.812585 env[1311]: time="2024-12-13T14:18:55.812536309Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 14:18:56.387887 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 14:18:56.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:56.388036 systemd[1]: Stopped kubelet.service. Dec 13 14:18:56.389588 systemd[1]: Starting kubelet.service... Dec 13 14:18:56.395346 kernel: audit: type=1130 audit(1734099536.387:197): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:56.395496 kernel: audit: type=1131 audit(1734099536.387:198): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:56.387000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:56.395370 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount281243267.mount: Deactivated successfully. Dec 13 14:18:56.468695 env[1311]: time="2024-12-13T14:18:56.468622745Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:56.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:18:56.476030 systemd[1]: Started kubelet.service. Dec 13 14:18:56.481059 kernel: audit: type=1130 audit(1734099536.475:199): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:18:56.584731 env[1311]: time="2024-12-13T14:18:56.584667220Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:56.586396 env[1311]: time="2024-12-13T14:18:56.586351308Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:56.588006 env[1311]: time="2024-12-13T14:18:56.587942452Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:18:56.588667 env[1311]: time="2024-12-13T14:18:56.588598012Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 14:18:56.602357 env[1311]: time="2024-12-13T14:18:56.602306340Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 14:18:56.684098 kubelet[1683]: E1213 14:18:56.683888 1683 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:18:56.686311 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:18:56.686530 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:18:56.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 14:18:56.691045 kernel: audit: type=1131 audit(1734099536.685:200): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 14:18:57.162722 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount901014097.mount: Deactivated successfully. 
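The kubelet exits with status=1/FAILURE on each attempt above because /var/lib/kubelet/config.yaml does not exist yet; the restart loop only ends once that file is written (presumably by kubeadm or an equivalent provisioning step later in the boot). A minimal sketch of the same existence check, purely illustrative and not the kubelet's own code:

```go
// Minimal sketch of the failure mode logged above: exit non-zero when the
// kubelet config file is absent. Illustrative only; not kubelet source code.
package main

import (
	"fmt"
	"os"
)

func main() {
	const path = "/var/lib/kubelet/config.yaml" // path taken from the error in the log
	if _, err := os.Stat(path); err != nil {
		fmt.Fprintf(os.Stderr, "failed to load kubelet config file %q: %v\n", path, err)
		os.Exit(1) // systemd records this as status=1/FAILURE and schedules another restart
	}
	fmt.Println("kubelet config file present")
}
```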
Dec 13 14:19:01.788191 env[1311]: time="2024-12-13T14:19:01.788115753Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:01.828382 env[1311]: time="2024-12-13T14:19:01.828297428Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:01.839088 env[1311]: time="2024-12-13T14:19:01.839004979Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:01.858660 env[1311]: time="2024-12-13T14:19:01.858575350Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:01.859522 env[1311]: time="2024-12-13T14:19:01.859462464Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Dec 13 14:19:03.977722 systemd[1]: Stopped kubelet.service. Dec 13 14:19:03.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:19:03.978000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:19:03.981705 systemd[1]: Starting kubelet.service... Dec 13 14:19:03.984718 kernel: audit: type=1130 audit(1734099543.976:201): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:19:03.984789 kernel: audit: type=1131 audit(1734099543.978:202): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:19:03.999249 systemd[1]: Reloading. Dec 13 14:19:04.058889 /usr/lib/systemd/system-generators/torcx-generator[1800]: time="2024-12-13T14:19:04Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:19:04.058922 /usr/lib/systemd/system-generators/torcx-generator[1800]: time="2024-12-13T14:19:04Z" level=info msg="torcx already run" Dec 13 14:19:04.252797 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:19:04.252820 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:19:04.280141 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Dec 13 14:19:04.372358 systemd[1]: Started kubelet.service. Dec 13 14:19:04.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:19:04.377060 kernel: audit: type=1130 audit(1734099544.371:203): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:19:04.379434 systemd[1]: Stopping kubelet.service... Dec 13 14:19:04.380674 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 14:19:04.381001 systemd[1]: Stopped kubelet.service. Dec 13 14:19:04.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:19:04.383157 systemd[1]: Starting kubelet.service... Dec 13 14:19:04.387859 kernel: audit: type=1131 audit(1734099544.380:204): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:19:04.469118 systemd[1]: Started kubelet.service. Dec 13 14:19:04.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:19:04.476151 kernel: audit: type=1130 audit(1734099544.469:205): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:19:04.515599 kubelet[1864]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:19:04.515599 kubelet[1864]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 14:19:04.515599 kubelet[1864]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
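The kubelet lines that follow use klog's header format: a severity letter (I/W/E/F), MMDD, a microsecond timestamp, the process id, and a source file:line before the message. As a hedged convenience sketch (not part of any of the tools in this log), the header can be split out of journal output like so:

```go
// Hedged sketch: split the klog header used by the kubelet lines below
// (e.g. "E1213 14:19:04.914778 1864 certificate_manager.go:562] ...") into
// severity, date, time, pid, source location, and message. Illustrative only.
package main

import (
	"fmt"
	"regexp"
)

var klogHeader = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+)\s+([^\]]+)\] (.*)$`)

func main() {
	line := `E1213 14:19:04.914778 1864 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane`
	m := klogHeader.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("not a klog line")
		return
	}
	fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s\nmsg=%s\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}
```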
Dec 13 14:19:04.516258 kubelet[1864]: I1213 14:19:04.515572 1864 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 14:19:04.868588 kubelet[1864]: I1213 14:19:04.868457 1864 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 14:19:04.868588 kubelet[1864]: I1213 14:19:04.868496 1864 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 14:19:04.868773 kubelet[1864]: I1213 14:19:04.868762 1864 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 14:19:04.914839 kubelet[1864]: E1213 14:19:04.914778 1864 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.24:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.24:6443: connect: connection refused Dec 13 14:19:04.916244 kubelet[1864]: I1213 14:19:04.916221 1864 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:19:04.968173 kubelet[1864]: I1213 14:19:04.968138 1864 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 14:19:04.969421 kubelet[1864]: I1213 14:19:04.969398 1864 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 14:19:04.969610 kubelet[1864]: I1213 14:19:04.969590 1864 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 14:19:04.969713 kubelet[1864]: I1213 14:19:04.969626 1864 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 14:19:04.969713 kubelet[1864]: I1213 14:19:04.969635 1864 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 14:19:04.969766 kubelet[1864]: I1213 14:19:04.969757 1864 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:19:04.969893 kubelet[1864]: I1213 14:19:04.969880 1864 kubelet.go:396] "Attempting to sync node with API server" Dec 13 14:19:04.969952 kubelet[1864]: 
I1213 14:19:04.969900 1864 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 14:19:04.969952 kubelet[1864]: I1213 14:19:04.969931 1864 kubelet.go:312] "Adding apiserver pod source" Dec 13 14:19:04.969952 kubelet[1864]: I1213 14:19:04.969951 1864 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 14:19:04.970557 kubelet[1864]: W1213 14:19:04.970507 1864 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused Dec 13 14:19:04.970599 kubelet[1864]: E1213 14:19:04.970565 1864 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused Dec 13 14:19:04.970844 kubelet[1864]: W1213 14:19:04.970799 1864 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.24:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused Dec 13 14:19:04.970872 kubelet[1864]: E1213 14:19:04.970844 1864 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.24:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused Dec 13 14:19:04.972992 kubelet[1864]: I1213 14:19:04.972968 1864 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 14:19:04.995879 kubelet[1864]: I1213 14:19:04.995836 1864 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 14:19:04.996762 kubelet[1864]: W1213 14:19:04.996736 1864 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
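The repeated "dial tcp 10.0.0.24:6443: connect: connection refused" errors above come from the certificate bootstrap and the client-go informers probing an API server that is not up yet; on this node the kubelet itself is about to start the static kube-apiserver pod, so the errors are expected to clear once that pod is running. A tiny probe of the same endpoint, with the address copied from the log and nothing else taken from kubelet code:

```go
// Hedged sketch: check the condition behind the repeated
// "dial tcp 10.0.0.24:6443: connect: connection refused" messages above.
// The address is copied from the log; the 2s timeout is an arbitrary choice.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "10.0.0.24:6443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable yet:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver endpoint is accepting connections")
}
```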
Dec 13 14:19:04.997342 kubelet[1864]: I1213 14:19:04.997317 1864 server.go:1256] "Started kubelet" Dec 13 14:19:04.997455 kubelet[1864]: I1213 14:19:04.997429 1864 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 14:19:04.997636 kubelet[1864]: I1213 14:19:04.997606 1864 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 14:19:04.997983 kubelet[1864]: I1213 14:19:04.997956 1864 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 14:19:04.998340 kubelet[1864]: I1213 14:19:04.998310 1864 server.go:461] "Adding debug handlers to kubelet server" Dec 13 14:19:04.998000 audit[1864]: AVC avc: denied { mac_admin } for pid=1864 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:04.999794 kubelet[1864]: I1213 14:19:04.999725 1864 kubelet.go:1417] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Dec 13 14:19:04.999794 kubelet[1864]: I1213 14:19:04.999762 1864 kubelet.go:1421] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Dec 13 14:19:04.999861 kubelet[1864]: I1213 14:19:04.999853 1864 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 14:19:04.998000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 14:19:05.004824 kernel: audit: type=1400 audit(1734099544.998:206): avc: denied { mac_admin } for pid=1864 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:05.004876 kernel: audit: type=1401 audit(1734099544.998:206): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 14:19:05.004893 kernel: audit: type=1300 audit(1734099544.998:206): arch=c000003e syscall=188 success=no exit=-22 a0=c000c666c0 a1=c000d08390 a2=c000c66690 a3=25 items=0 ppid=1 pid=1864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:04.998000 audit[1864]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000c666c0 a1=c000d08390 a2=c000c66690 a3=25 items=0 ppid=1 pid=1864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:05.005674 kubelet[1864]: I1213 14:19:05.005655 1864 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 14:19:05.009755 kernel: audit: type=1327 audit(1734099544.998:206): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 14:19:04.998000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 14:19:04.998000 audit[1864]: AVC avc: denied { mac_admin } for pid=1864 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:05.027271 kernel: audit: type=1400 audit(1734099544.998:207): avc: denied { mac_admin } for pid=1864 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:04.998000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 14:19:04.998000 audit[1864]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0009d4760 a1=c000d083a8 a2=c000c66750 a3=25 items=0 ppid=1 pid=1864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:04.998000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 14:19:04.998000 audit[1876]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1876 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:19:04.998000 audit[1876]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffdb9024810 a2=0 a3=7ffdb90247fc items=0 ppid=1864 pid=1876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:04.998000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 13 14:19:05.002000 audit[1877]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1877 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:19:05.002000 audit[1877]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd7b823300 a2=0 a3=7ffd7b8232ec items=0 ppid=1864 pid=1877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:05.002000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 13 14:19:05.036548 kubelet[1864]: E1213 14:19:05.035849 1864 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.24:6443: connect: connection refused" interval="200ms" Dec 13 14:19:05.036548 kubelet[1864]: I1213 14:19:05.035921 1864 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 14:19:05.036548 kubelet[1864]: W1213 14:19:05.036255 1864 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get 
"https://10.0.0.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused Dec 13 14:19:05.036548 kubelet[1864]: E1213 14:19:05.036301 1864 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused Dec 13 14:19:05.035000 audit[1879]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1879 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:19:05.035000 audit[1879]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffedcaa0860 a2=0 a3=7ffedcaa084c items=0 ppid=1864 pid=1879 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:05.035000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 14:19:05.038002 kubelet[1864]: I1213 14:19:05.037974 1864 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 14:19:05.037000 audit[1881]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1881 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:19:05.037000 audit[1881]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffdb6337e60 a2=0 a3=7ffdb6337e4c items=0 ppid=1864 pid=1881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:05.037000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 14:19:05.044648 kubelet[1864]: I1213 14:19:05.044409 1864 factory.go:221] Registration of the systemd container factory successfully Dec 13 14:19:05.044648 kubelet[1864]: I1213 14:19:05.044512 1864 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 14:19:05.049048 kubelet[1864]: E1213 14:19:05.049005 1864 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 14:19:05.049292 kubelet[1864]: I1213 14:19:05.049279 1864 factory.go:221] Registration of the containerd container factory successfully Dec 13 14:19:05.048000 audit[1885]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1885 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:19:05.048000 audit[1885]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffd90ebe1c0 a2=0 a3=7ffd90ebe1ac items=0 ppid=1864 pid=1885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:05.048000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Dec 13 14:19:05.050206 kubelet[1864]: I1213 14:19:05.050185 1864 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 14:19:05.049000 audit[1886]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1886 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:19:05.049000 audit[1886]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe27cbf400 a2=0 a3=7ffe27cbf3ec items=0 ppid=1864 pid=1886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:05.049000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 13 14:19:05.051335 kubelet[1864]: I1213 14:19:05.051156 1864 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 14:19:05.051335 kubelet[1864]: I1213 14:19:05.051201 1864 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 14:19:05.051335 kubelet[1864]: I1213 14:19:05.051228 1864 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 14:19:05.051335 kubelet[1864]: E1213 14:19:05.051276 1864 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 14:19:05.051000 audit[1887]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=1887 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:19:05.051000 audit[1887]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe64f98500 a2=0 a3=7ffe64f984ec items=0 ppid=1864 pid=1887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:05.051000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 13 14:19:05.052000 audit[1888]: NETFILTER_CFG table=nat:33 family=2 entries=1 op=nft_register_chain pid=1888 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:19:05.052000 audit[1888]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff1e752870 a2=0 a3=7fff1e75285c items=0 ppid=1864 pid=1888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:05.052000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 13 14:19:05.053000 audit[1889]: NETFILTER_CFG table=filter:34 family=2 entries=1 op=nft_register_chain pid=1889 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:19:05.053000 audit[1889]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff82d10430 a2=0 a3=7fff82d1041c items=0 ppid=1864 pid=1889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:05.053000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 13 14:19:05.054476 kubelet[1864]: E1213 14:19:05.054350 1864 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.24:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.24:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1810c25938bae509 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 14:19:04.997291273 +0000 UTC m=+0.524172662,LastTimestamp:2024-12-13 14:19:04.997291273 +0000 UTC m=+0.524172662,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 14:19:05.055131 kubelet[1864]: W1213 14:19:05.055082 1864 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed 
to list *v1.RuntimeClass: Get "https://10.0.0.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused Dec 13 14:19:05.054000 audit[1890]: NETFILTER_CFG table=mangle:35 family=10 entries=1 op=nft_register_chain pid=1890 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:19:05.054000 audit[1890]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc6df315f0 a2=0 a3=7ffc6df315dc items=0 ppid=1864 pid=1890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:05.054000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 13 14:19:05.055305 kubelet[1864]: E1213 14:19:05.055134 1864 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused Dec 13 14:19:05.054000 audit[1891]: NETFILTER_CFG table=nat:36 family=10 entries=2 op=nft_register_chain pid=1891 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:19:05.054000 audit[1891]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffde618e000 a2=0 a3=7ffde618dfec items=0 ppid=1864 pid=1891 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:05.054000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 13 14:19:05.055000 audit[1892]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=1892 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:19:05.055000 audit[1892]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffea4d918a0 a2=0 a3=7ffea4d9188c items=0 ppid=1864 pid=1892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:05.055000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 13 14:19:05.065752 kubelet[1864]: I1213 14:19:05.065712 1864 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 14:19:05.065752 kubelet[1864]: I1213 14:19:05.065744 1864 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 14:19:05.065874 kubelet[1864]: I1213 14:19:05.065770 1864 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:19:05.107600 kubelet[1864]: I1213 14:19:05.107565 1864 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 14:19:05.107977 kubelet[1864]: E1213 14:19:05.107945 1864 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.24:6443/api/v1/nodes\": dial tcp 10.0.0.24:6443: connect: connection refused" node="localhost" Dec 13 14:19:05.151499 kubelet[1864]: E1213 14:19:05.151352 1864 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 14:19:05.237169 kubelet[1864]: E1213 
14:19:05.237126 1864 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.24:6443: connect: connection refused" interval="400ms" Dec 13 14:19:05.309546 kubelet[1864]: I1213 14:19:05.309503 1864 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 14:19:05.309830 kubelet[1864]: E1213 14:19:05.309799 1864 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.24:6443/api/v1/nodes\": dial tcp 10.0.0.24:6443: connect: connection refused" node="localhost" Dec 13 14:19:05.352488 kubelet[1864]: E1213 14:19:05.352420 1864 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 14:19:05.638563 kubelet[1864]: E1213 14:19:05.638493 1864 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.24:6443: connect: connection refused" interval="800ms" Dec 13 14:19:05.712001 kubelet[1864]: I1213 14:19:05.711958 1864 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 14:19:05.712372 kubelet[1864]: E1213 14:19:05.712332 1864 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.24:6443/api/v1/nodes\": dial tcp 10.0.0.24:6443: connect: connection refused" node="localhost" Dec 13 14:19:05.753029 kubelet[1864]: E1213 14:19:05.752959 1864 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 14:19:05.994831 kubelet[1864]: I1213 14:19:05.994756 1864 policy_none.go:49] "None policy: Start" Dec 13 14:19:05.995710 kubelet[1864]: I1213 14:19:05.995660 1864 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 14:19:05.995710 kubelet[1864]: I1213 14:19:05.995725 1864 state_mem.go:35] "Initializing new in-memory state store" Dec 13 14:19:06.025039 kubelet[1864]: W1213 14:19:06.024938 1864 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.24:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused Dec 13 14:19:06.025039 kubelet[1864]: E1213 14:19:06.025034 1864 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.24:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused Dec 13 14:19:06.332000 audit[1864]: AVC avc: denied { mac_admin } for pid=1864 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:06.332000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 14:19:06.332000 audit[1864]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000a7e9f0 a1=c0010bf0f8 a2=c000a7e9c0 a3=25 items=0 ppid=1 pid=1864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:06.332000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 14:19:06.334142 kubelet[1864]: I1213 14:19:06.333580 1864 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 14:19:06.334142 kubelet[1864]: I1213 14:19:06.333725 1864 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Dec 13 14:19:06.335428 kubelet[1864]: I1213 14:19:06.335372 1864 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 14:19:06.335732 kubelet[1864]: E1213 14:19:06.335693 1864 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 13 14:19:06.439504 kubelet[1864]: E1213 14:19:06.439455 1864 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.24:6443: connect: connection refused" interval="1.6s" Dec 13 14:19:06.514090 kubelet[1864]: I1213 14:19:06.514035 1864 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 14:19:06.514488 kubelet[1864]: E1213 14:19:06.514451 1864 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.24:6443/api/v1/nodes\": dial tcp 10.0.0.24:6443: connect: connection refused" node="localhost" Dec 13 14:19:06.517126 kubelet[1864]: W1213 14:19:06.517068 1864 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused Dec 13 14:19:06.517126 kubelet[1864]: E1213 14:19:06.517123 1864 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused Dec 13 14:19:06.535837 kubelet[1864]: W1213 14:19:06.535714 1864 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused Dec 13 14:19:06.535837 kubelet[1864]: E1213 14:19:06.535791 1864 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused Dec 13 14:19:06.554247 kubelet[1864]: I1213 14:19:06.554181 1864 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 14:19:06.555607 kubelet[1864]: I1213 14:19:06.555577 1864 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 
14:19:06.556418 kubelet[1864]: I1213 14:19:06.556403 1864 topology_manager.go:215] "Topology Admit Handler" podUID="6129104aa4dc311e55f3bc2fd866fd06" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 14:19:06.565336 kubelet[1864]: W1213 14:19:06.565273 1864 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused Dec 13 14:19:06.565336 kubelet[1864]: E1213 14:19:06.565334 1864 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.24:6443: connect: connection refused Dec 13 14:19:06.648135 kubelet[1864]: I1213 14:19:06.647969 1864 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:19:06.648135 kubelet[1864]: I1213 14:19:06.648053 1864 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:19:06.648135 kubelet[1864]: I1213 14:19:06.648085 1864 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Dec 13 14:19:06.648695 kubelet[1864]: I1213 14:19:06.648225 1864 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6129104aa4dc311e55f3bc2fd866fd06-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6129104aa4dc311e55f3bc2fd866fd06\") " pod="kube-system/kube-apiserver-localhost" Dec 13 14:19:06.648695 kubelet[1864]: I1213 14:19:06.648286 1864 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6129104aa4dc311e55f3bc2fd866fd06-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6129104aa4dc311e55f3bc2fd866fd06\") " pod="kube-system/kube-apiserver-localhost" Dec 13 14:19:06.648695 kubelet[1864]: I1213 14:19:06.648315 1864 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:19:06.648695 kubelet[1864]: I1213 14:19:06.648345 1864 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: 
\"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:19:06.648695 kubelet[1864]: I1213 14:19:06.648367 1864 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:19:06.648846 kubelet[1864]: I1213 14:19:06.648387 1864 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6129104aa4dc311e55f3bc2fd866fd06-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6129104aa4dc311e55f3bc2fd866fd06\") " pod="kube-system/kube-apiserver-localhost" Dec 13 14:19:06.860500 kubelet[1864]: E1213 14:19:06.860442 1864 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:06.861096 kubelet[1864]: E1213 14:19:06.861054 1864 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:06.861438 env[1311]: time="2024-12-13T14:19:06.861388909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,}" Dec 13 14:19:06.861733 env[1311]: time="2024-12-13T14:19:06.861660797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,}" Dec 13 14:19:06.861888 kubelet[1864]: E1213 14:19:06.861867 1864 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:06.862255 env[1311]: time="2024-12-13T14:19:06.862226265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6129104aa4dc311e55f3bc2fd866fd06,Namespace:kube-system,Attempt:0,}" Dec 13 14:19:06.926456 kubelet[1864]: E1213 14:19:06.926331 1864 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.24:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.24:6443: connect: connection refused Dec 13 14:19:07.473978 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1787486450.mount: Deactivated successfully. 
Dec 13 14:19:07.478952 env[1311]: time="2024-12-13T14:19:07.478901673Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:07.481489 env[1311]: time="2024-12-13T14:19:07.481462486Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:07.483138 env[1311]: time="2024-12-13T14:19:07.483096161Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:07.484096 env[1311]: time="2024-12-13T14:19:07.484061653Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:07.485917 env[1311]: time="2024-12-13T14:19:07.485886810Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:07.487183 env[1311]: time="2024-12-13T14:19:07.487138004Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:07.488565 env[1311]: time="2024-12-13T14:19:07.488522898Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:07.490346 env[1311]: time="2024-12-13T14:19:07.490315171Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:07.493095 env[1311]: time="2024-12-13T14:19:07.493062104Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:07.494362 env[1311]: time="2024-12-13T14:19:07.494333648Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:07.495789 env[1311]: time="2024-12-13T14:19:07.495736106Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:07.496853 env[1311]: time="2024-12-13T14:19:07.496816420Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:07.525076 env[1311]: time="2024-12-13T14:19:07.524879633Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:19:07.525076 env[1311]: time="2024-12-13T14:19:07.524916224Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:19:07.525076 env[1311]: time="2024-12-13T14:19:07.524925603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:19:07.525293 env[1311]: time="2024-12-13T14:19:07.525098006Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/78fc1b3f4563808699d5a6d4dba1f47e4030bc59c67fcd349113a8af4441fd53 pid=1905 runtime=io.containerd.runc.v2 Dec 13 14:19:07.538801 env[1311]: time="2024-12-13T14:19:07.538598418Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:19:07.538801 env[1311]: time="2024-12-13T14:19:07.538642003Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:19:07.538801 env[1311]: time="2024-12-13T14:19:07.538652654Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:19:07.538990 env[1311]: time="2024-12-13T14:19:07.538838393Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a57b40ca9792c5c71983987b9eb43470c36417b771ace484f32c8ac297bae86d pid=1922 runtime=io.containerd.runc.v2 Dec 13 14:19:07.539603 env[1311]: time="2024-12-13T14:19:07.539157412Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:19:07.539603 env[1311]: time="2024-12-13T14:19:07.539181940Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:19:07.539603 env[1311]: time="2024-12-13T14:19:07.539191357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:19:07.539717 env[1311]: time="2024-12-13T14:19:07.539672901Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/efaf07ceb86298c385fc3d188376cf9d94edf8545e3f668d88e5596e94fb5d98 pid=1929 runtime=io.containerd.runc.v2 Dec 13 14:19:07.651882 env[1311]: time="2024-12-13T14:19:07.651681301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6129104aa4dc311e55f3bc2fd866fd06,Namespace:kube-system,Attempt:0,} returns sandbox id \"78fc1b3f4563808699d5a6d4dba1f47e4030bc59c67fcd349113a8af4441fd53\"" Dec 13 14:19:07.655641 kubelet[1864]: E1213 14:19:07.655602 1864 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:07.658470 env[1311]: time="2024-12-13T14:19:07.658429989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,} returns sandbox id \"a57b40ca9792c5c71983987b9eb43470c36417b771ace484f32c8ac297bae86d\"" Dec 13 14:19:07.659156 kubelet[1864]: E1213 14:19:07.659129 1864 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:07.659677 env[1311]: time="2024-12-13T14:19:07.659649512Z" level=info msg="CreateContainer within sandbox \"78fc1b3f4563808699d5a6d4dba1f47e4030bc59c67fcd349113a8af4441fd53\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 14:19:07.662259 env[1311]: time="2024-12-13T14:19:07.661817644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,} returns sandbox id \"efaf07ceb86298c385fc3d188376cf9d94edf8545e3f668d88e5596e94fb5d98\"" Dec 13 14:19:07.662910 kubelet[1864]: E1213 14:19:07.662879 1864 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:07.663215 env[1311]: time="2024-12-13T14:19:07.663188200Z" level=info msg="CreateContainer within sandbox \"a57b40ca9792c5c71983987b9eb43470c36417b771ace484f32c8ac297bae86d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 14:19:07.664410 env[1311]: time="2024-12-13T14:19:07.664387875Z" level=info msg="CreateContainer within sandbox \"efaf07ceb86298c385fc3d188376cf9d94edf8545e3f668d88e5596e94fb5d98\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 14:19:07.684631 env[1311]: time="2024-12-13T14:19:07.684575927Z" level=info msg="CreateContainer within sandbox \"78fc1b3f4563808699d5a6d4dba1f47e4030bc59c67fcd349113a8af4441fd53\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"23264b56af7a85bdd1199b83a10e9ff5e6c59713a01dc84f93707462ddf82896\"" Dec 13 14:19:07.685832 env[1311]: time="2024-12-13T14:19:07.685772867Z" level=info msg="StartContainer for \"23264b56af7a85bdd1199b83a10e9ff5e6c59713a01dc84f93707462ddf82896\"" Dec 13 14:19:07.696931 env[1311]: time="2024-12-13T14:19:07.696838921Z" level=info msg="CreateContainer within sandbox \"efaf07ceb86298c385fc3d188376cf9d94edf8545e3f668d88e5596e94fb5d98\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} 
returns container id \"b69410abdff5fe453943a4243950209a19ed9f5cea56b8f78f17bbfbeb2686eb\"" Dec 13 14:19:07.697704 env[1311]: time="2024-12-13T14:19:07.697668329Z" level=info msg="StartContainer for \"b69410abdff5fe453943a4243950209a19ed9f5cea56b8f78f17bbfbeb2686eb\"" Dec 13 14:19:07.699113 env[1311]: time="2024-12-13T14:19:07.699073391Z" level=info msg="CreateContainer within sandbox \"a57b40ca9792c5c71983987b9eb43470c36417b771ace484f32c8ac297bae86d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"244d4cbf2ff0c65efc02d63b6390da47faa8b9272411984be204e9f6bbb4979b\"" Dec 13 14:19:07.699717 env[1311]: time="2024-12-13T14:19:07.699671812Z" level=info msg="StartContainer for \"244d4cbf2ff0c65efc02d63b6390da47faa8b9272411984be204e9f6bbb4979b\"" Dec 13 14:19:07.765202 env[1311]: time="2024-12-13T14:19:07.765090295Z" level=info msg="StartContainer for \"23264b56af7a85bdd1199b83a10e9ff5e6c59713a01dc84f93707462ddf82896\" returns successfully" Dec 13 14:19:07.786304 env[1311]: time="2024-12-13T14:19:07.786238849Z" level=info msg="StartContainer for \"244d4cbf2ff0c65efc02d63b6390da47faa8b9272411984be204e9f6bbb4979b\" returns successfully" Dec 13 14:19:07.848383 env[1311]: time="2024-12-13T14:19:07.848320957Z" level=info msg="StartContainer for \"b69410abdff5fe453943a4243950209a19ed9f5cea56b8f78f17bbfbeb2686eb\" returns successfully" Dec 13 14:19:08.062522 kubelet[1864]: E1213 14:19:08.062189 1864 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:08.064740 kubelet[1864]: E1213 14:19:08.064717 1864 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:08.066926 kubelet[1864]: E1213 14:19:08.066910 1864 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:08.116458 kubelet[1864]: I1213 14:19:08.116432 1864 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 14:19:09.071913 kubelet[1864]: E1213 14:19:09.071827 1864 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:09.466613 kubelet[1864]: E1213 14:19:09.466468 1864 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 13 14:19:09.536321 kubelet[1864]: I1213 14:19:09.536257 1864 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 14:19:09.973780 kubelet[1864]: I1213 14:19:09.973750 1864 apiserver.go:52] "Watching apiserver" Dec 13 14:19:10.037106 kubelet[1864]: I1213 14:19:10.037041 1864 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 14:19:10.065929 kubelet[1864]: E1213 14:19:10.065881 1864 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Dec 13 14:19:10.066266 kubelet[1864]: E1213 14:19:10.066246 1864 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:12.698767 systemd[1]: Reloading. Dec 13 14:19:12.832250 /usr/lib/systemd/system-generators/torcx-generator[2160]: time="2024-12-13T14:19:12Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:19:12.832284 /usr/lib/systemd/system-generators/torcx-generator[2160]: time="2024-12-13T14:19:12Z" level=info msg="torcx already run" Dec 13 14:19:12.924826 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:19:12.924849 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:19:12.952377 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:19:13.053607 kubelet[1864]: I1213 14:19:13.053574 1864 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:19:13.053685 systemd[1]: Stopping kubelet.service... Dec 13 14:19:13.065375 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 14:19:13.065704 systemd[1]: Stopped kubelet.service. Dec 13 14:19:13.064000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:19:13.068085 systemd[1]: Starting kubelet.service... Dec 13 14:19:13.082139 kernel: kauditd_printk_skb: 43 callbacks suppressed Dec 13 14:19:13.082245 kernel: audit: type=1131 audit(1734099553.064:221): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:19:13.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:19:13.169740 systemd[1]: Started kubelet.service. Dec 13 14:19:13.214196 kernel: audit: type=1130 audit(1734099553.169:222): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:19:13.220775 kubelet[2218]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:19:13.221240 kubelet[2218]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 14:19:13.221240 kubelet[2218]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
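The recurring dns.go "Nameserver limits exceeded" warnings above and below reflect the kubelet trimming the node's resolver configuration to the classic three-nameserver cap before applying it, which is why the applied line keeps exactly three addresses (1.1.1.1 1.0.0.1 8.8.8.8). A minimal Go sketch of that truncation follows; it is illustrative only, not the kubelet's actual implementation, and the three-server constant and sample resolv.conf contents are assumptions made for the example:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// maxNameservers mirrors the conventional resolver limit of three servers,
// consistent with the three addresses kept in the warnings in this log.
const maxNameservers = 3

// applyNameserverLimit collects "nameserver" entries and keeps at most
// maxNameservers of them, reporting whether any were dropped.
func applyNameserverLimit(resolvConf string) (kept []string, dropped bool) {
	scanner := bufio.NewScanner(strings.NewReader(resolvConf))
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			kept = append(kept, fields[1])
		}
	}
	if len(kept) > maxNameservers {
		kept, dropped = kept[:maxNameservers], true
	}
	return kept, dropped
}

func main() {
	// Hypothetical resolv.conf with four servers; only the first three survive.
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
	kept, dropped := applyNameserverLimit(conf)
	if dropped {
		fmt.Printf("Nameserver limits exceeded, applied line: %s\n", strings.Join(kept, " "))
	}
}

Run against the hypothetical four-server file, the sketch prints an applied line of "1.1.1.1 1.0.0.1 8.8.8.8", matching the line recorded in the warnings here.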
Dec 13 14:19:13.221373 kubelet[2218]: I1213 14:19:13.221289 2218 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 14:19:13.225389 kubelet[2218]: I1213 14:19:13.225342 2218 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 14:19:13.225389 kubelet[2218]: I1213 14:19:13.225376 2218 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 14:19:13.225644 kubelet[2218]: I1213 14:19:13.225619 2218 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 14:19:13.227123 kubelet[2218]: I1213 14:19:13.227097 2218 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 14:19:13.229585 kubelet[2218]: I1213 14:19:13.229555 2218 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:19:13.237975 kubelet[2218]: I1213 14:19:13.237881 2218 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 14:19:13.238795 kubelet[2218]: I1213 14:19:13.238777 2218 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 14:19:13.239159 kubelet[2218]: I1213 14:19:13.239110 2218 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 14:19:13.239159 kubelet[2218]: I1213 14:19:13.239154 2218 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 14:19:13.239159 kubelet[2218]: I1213 14:19:13.239166 2218 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 14:19:13.239525 kubelet[2218]: I1213 14:19:13.239203 2218 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:19:13.239525 kubelet[2218]: I1213 14:19:13.239318 2218 kubelet.go:396] "Attempting to sync node with API server" Dec 13 14:19:13.239525 kubelet[2218]: I1213 14:19:13.239333 2218 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 14:19:13.239995 kubelet[2218]: I1213 14:19:13.239836 2218 kubelet.go:312] "Adding apiserver pod source" Dec 13 
14:19:13.239995 kubelet[2218]: I1213 14:19:13.239865 2218 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 14:19:13.240839 kubelet[2218]: I1213 14:19:13.240820 2218 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 14:19:13.241065 kubelet[2218]: I1213 14:19:13.241006 2218 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 14:19:13.241471 kubelet[2218]: I1213 14:19:13.241451 2218 server.go:1256] "Started kubelet" Dec 13 14:19:13.242000 audit[2218]: AVC avc: denied { mac_admin } for pid=2218 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:13.245239 kubelet[2218]: I1213 14:19:13.243516 2218 kubelet.go:1417] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Dec 13 14:19:13.245239 kubelet[2218]: I1213 14:19:13.243558 2218 kubelet.go:1421] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Dec 13 14:19:13.245239 kubelet[2218]: I1213 14:19:13.243586 2218 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 14:19:13.249420 kubelet[2218]: I1213 14:19:13.249392 2218 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 14:19:13.253148 kubelet[2218]: I1213 14:19:13.253125 2218 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 14:19:13.253534 kubelet[2218]: I1213 14:19:13.253518 2218 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 14:19:13.256518 kubelet[2218]: I1213 14:19:13.256480 2218 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 14:19:13.257008 kubelet[2218]: I1213 14:19:13.256978 2218 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 14:19:13.257104 kubelet[2218]: I1213 14:19:13.257034 2218 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 14:19:13.257779 kubelet[2218]: E1213 14:19:13.257760 2218 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 14:19:13.259376 kubelet[2218]: I1213 14:19:13.259340 2218 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 14:19:13.259555 kubelet[2218]: I1213 14:19:13.259487 2218 server.go:461] "Adding debug handlers to kubelet server" Dec 13 14:19:13.266030 kubelet[2218]: I1213 14:19:13.265992 2218 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 14:19:13.270930 kubelet[2218]: I1213 14:19:13.270907 2218 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 14:19:13.271027 kubelet[2218]: I1213 14:19:13.270943 2218 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 14:19:13.271027 kubelet[2218]: I1213 14:19:13.270963 2218 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 14:19:13.271027 kubelet[2218]: E1213 14:19:13.271026 2218 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 14:19:13.273240 kubelet[2218]: I1213 14:19:13.273209 2218 factory.go:221] Registration of the containerd container factory successfully Dec 13 14:19:13.273240 kubelet[2218]: I1213 14:19:13.273230 2218 factory.go:221] Registration of the systemd container factory successfully Dec 13 14:19:13.276704 kernel: audit: type=1400 audit(1734099553.242:223): avc: denied { mac_admin } for pid=2218 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:13.276853 kernel: audit: type=1401 audit(1734099553.242:223): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 14:19:13.242000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 14:19:13.242000 audit[2218]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000b59a40 a1=c00075d9e0 a2=c000b59a10 a3=25 items=0 ppid=1 pid=2218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:13.281781 kernel: audit: type=1300 audit(1734099553.242:223): arch=c000003e syscall=188 success=no exit=-22 a0=c000b59a40 a1=c00075d9e0 a2=c000b59a10 a3=25 items=0 ppid=1 pid=2218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:13.286148 kernel: audit: type=1327 audit(1734099553.242:223): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 14:19:13.242000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 14:19:13.242000 audit[2218]: AVC avc: denied { mac_admin } for pid=2218 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:13.289366 kernel: audit: type=1400 audit(1734099553.242:224): avc: denied { mac_admin } for pid=2218 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:13.289426 kernel: audit: type=1401 audit(1734099553.242:224): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 14:19:13.242000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 14:19:13.291062 kernel: audit: type=1300 audit(1734099553.242:224): arch=c000003e syscall=188 success=no exit=-22 a0=c000a41460 a1=c00075d9f8 a2=c000b59ad0 a3=25 items=0 
ppid=1 pid=2218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:13.242000 audit[2218]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000a41460 a1=c00075d9f8 a2=c000b59ad0 a3=25 items=0 ppid=1 pid=2218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:13.242000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 14:19:13.299508 kernel: audit: type=1327 audit(1734099553.242:224): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 14:19:13.316909 kubelet[2218]: I1213 14:19:13.316869 2218 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 14:19:13.316909 kubelet[2218]: I1213 14:19:13.316898 2218 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 14:19:13.316909 kubelet[2218]: I1213 14:19:13.316919 2218 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:19:13.317229 kubelet[2218]: I1213 14:19:13.317182 2218 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 14:19:13.317229 kubelet[2218]: I1213 14:19:13.317205 2218 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 14:19:13.317229 kubelet[2218]: I1213 14:19:13.317211 2218 policy_none.go:49] "None policy: Start" Dec 13 14:19:13.318386 kubelet[2218]: I1213 14:19:13.318233 2218 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 14:19:13.318386 kubelet[2218]: I1213 14:19:13.318286 2218 state_mem.go:35] "Initializing new in-memory state store" Dec 13 14:19:13.319312 kubelet[2218]: I1213 14:19:13.319273 2218 state_mem.go:75] "Updated machine memory state" Dec 13 14:19:13.321123 kubelet[2218]: I1213 14:19:13.321056 2218 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 14:19:13.320000 audit[2218]: AVC avc: denied { mac_admin } for pid=2218 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:13.320000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 14:19:13.320000 audit[2218]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000ea0db0 a1=c000dcf758 a2=c000ea0d80 a3=25 items=0 ppid=1 pid=2218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:13.320000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 14:19:13.321647 kubelet[2218]: I1213 14:19:13.321146 2218 server.go:88] "Unprivileged containerized plugins 
might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Dec 13 14:19:13.321647 kubelet[2218]: I1213 14:19:13.321405 2218 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 14:19:13.360958 kubelet[2218]: I1213 14:19:13.360906 2218 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 14:19:13.372108 kubelet[2218]: I1213 14:19:13.372074 2218 topology_manager.go:215] "Topology Admit Handler" podUID="6129104aa4dc311e55f3bc2fd866fd06" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 14:19:13.372196 kubelet[2218]: I1213 14:19:13.372165 2218 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 14:19:13.372196 kubelet[2218]: I1213 14:19:13.372196 2218 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 14:19:13.558171 kubelet[2218]: I1213 14:19:13.558124 2218 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6129104aa4dc311e55f3bc2fd866fd06-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6129104aa4dc311e55f3bc2fd866fd06\") " pod="kube-system/kube-apiserver-localhost" Dec 13 14:19:13.558391 kubelet[2218]: I1213 14:19:13.558231 2218 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6129104aa4dc311e55f3bc2fd866fd06-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6129104aa4dc311e55f3bc2fd866fd06\") " pod="kube-system/kube-apiserver-localhost" Dec 13 14:19:13.558391 kubelet[2218]: I1213 14:19:13.558263 2218 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:19:13.558391 kubelet[2218]: I1213 14:19:13.558281 2218 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:19:13.558391 kubelet[2218]: I1213 14:19:13.558297 2218 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:19:13.558391 kubelet[2218]: I1213 14:19:13.558379 2218 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6129104aa4dc311e55f3bc2fd866fd06-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6129104aa4dc311e55f3bc2fd866fd06\") " pod="kube-system/kube-apiserver-localhost" Dec 13 14:19:13.558573 kubelet[2218]: I1213 14:19:13.558435 2218 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:19:13.558573 kubelet[2218]: I1213 14:19:13.558492 2218 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:19:13.558573 kubelet[2218]: I1213 14:19:13.558567 2218 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Dec 13 14:19:13.654975 kubelet[2218]: I1213 14:19:13.654924 2218 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Dec 13 14:19:13.655393 kubelet[2218]: I1213 14:19:13.655345 2218 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 14:19:13.827065 kubelet[2218]: E1213 14:19:13.826889 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:13.827288 kubelet[2218]: E1213 14:19:13.827241 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:13.827568 kubelet[2218]: E1213 14:19:13.827523 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:14.241208 kubelet[2218]: I1213 14:19:14.241162 2218 apiserver.go:52] "Watching apiserver" Dec 13 14:19:14.257945 kubelet[2218]: I1213 14:19:14.257907 2218 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 14:19:14.289769 kubelet[2218]: E1213 14:19:14.289738 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:14.289981 kubelet[2218]: E1213 14:19:14.289961 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:14.290183 kubelet[2218]: E1213 14:19:14.290104 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:14.348294 kubelet[2218]: I1213 14:19:14.348252 2218 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.3481945 podStartE2EDuration="1.3481945s" podCreationTimestamp="2024-12-13 14:19:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:19:14.347966453 +0000 
UTC m=+1.171024366" watchObservedRunningTime="2024-12-13 14:19:14.3481945 +0000 UTC m=+1.171252413" Dec 13 14:19:14.378709 kubelet[2218]: I1213 14:19:14.378665 2218 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.378611343 podStartE2EDuration="1.378611343s" podCreationTimestamp="2024-12-13 14:19:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:19:14.369145023 +0000 UTC m=+1.192202936" watchObservedRunningTime="2024-12-13 14:19:14.378611343 +0000 UTC m=+1.201669256" Dec 13 14:19:15.291584 kubelet[2218]: E1213 14:19:15.291551 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:15.292588 kubelet[2218]: E1213 14:19:15.292574 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:18.388523 kubelet[2218]: E1213 14:19:18.388471 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:18.528601 update_engine[1298]: I1213 14:19:18.528499 1298 update_attempter.cc:509] Updating boot flags... Dec 13 14:19:19.101074 kubelet[2218]: I1213 14:19:19.095390 2218 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=6.095311304 podStartE2EDuration="6.095311304s" podCreationTimestamp="2024-12-13 14:19:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:19:14.378951724 +0000 UTC m=+1.202009637" watchObservedRunningTime="2024-12-13 14:19:19.095311304 +0000 UTC m=+5.918369237" Dec 13 14:19:19.137886 sudo[1478]: pam_unix(sudo:session): session closed for user root Dec 13 14:19:19.136000 audit[1478]: USER_END pid=1478 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 14:19:19.138898 kernel: kauditd_printk_skb: 4 callbacks suppressed Dec 13 14:19:19.138991 kernel: audit: type=1106 audit(1734099559.136:226): pid=1478 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 14:19:19.139579 sshd[1473]: pam_unix(sshd:session): session closed for user core Dec 13 14:19:19.136000 audit[1478]: CRED_DISP pid=1478 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 14:19:19.142841 systemd[1]: sshd@6-10.0.0.24:22-10.0.0.1:49930.service: Deactivated successfully. Dec 13 14:19:19.143964 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 14:19:19.145842 kernel: audit: type=1104 audit(1734099559.136:227): pid=1478 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? 
addr=? terminal=? res=success' Dec 13 14:19:19.145904 kernel: audit: type=1106 audit(1734099559.140:228): pid=1473 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:19:19.140000 audit[1473]: USER_END pid=1473 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:19:19.146293 systemd-logind[1295]: Session 7 logged out. Waiting for processes to exit. Dec 13 14:19:19.147066 systemd-logind[1295]: Removed session 7. Dec 13 14:19:19.140000 audit[1473]: CRED_DISP pid=1473 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:19:19.153851 kernel: audit: type=1104 audit(1734099559.140:229): pid=1473 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:19:19.153887 kernel: audit: type=1131 audit(1734099559.142:230): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.24:22-10.0.0.1:49930 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:19:19.142000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.24:22-10.0.0.1:49930 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:19:19.298394 kubelet[2218]: E1213 14:19:19.298332 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:22.784039 kubelet[2218]: E1213 14:19:22.783974 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:23.305132 kubelet[2218]: E1213 14:19:23.305098 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:24.307031 kubelet[2218]: E1213 14:19:24.306973 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:24.783815 kubelet[2218]: E1213 14:19:24.783785 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:25.905931 kubelet[2218]: I1213 14:19:25.905881 2218 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 14:19:25.906396 env[1311]: time="2024-12-13T14:19:25.906311600Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
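The kernel audit type=1327 PROCTITLE records in this log (the kubelet ones above and the many iptables/ip6tables ones that follow as kube-proxy programs its chains) carry the process command line hex-encoded, with NUL bytes separating the arguments as in /proc/<pid>/cmdline. The small helper below is an illustrative sketch for decoding them, not part of any tool appearing in this log:

package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// decodeProctitle turns an audit proctitle= hex string back into a readable
// command line by splitting the decoded bytes on NUL separators.
func decodeProctitle(h string) (string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return "", err
	}
	args := strings.Split(strings.TrimRight(string(raw), "\x00"), "\x00")
	return strings.Join(args, " "), nil
}

func main() {
	// proctitle value taken from the first iptables audit record further down.
	const h = "69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65"
	cmd, err := decodeProctitle(h)
	if err != nil {
		panic(err)
	}
	fmt.Println(cmd)
}

Decoded, that record reads "iptables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle", i.e. the creation of the KUBE-PROXY-CANARY chain in the mangle table; the truncated kubelet proctitle values earlier in the log decode the same way.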
Dec 13 14:19:25.906621 kubelet[2218]: I1213 14:19:25.906591 2218 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 14:19:26.740225 kubelet[2218]: I1213 14:19:26.740174 2218 topology_manager.go:215] "Topology Admit Handler" podUID="84732371-5963-42e9-a477-5a035eec1cb2" podNamespace="kube-system" podName="kube-proxy-qg7hd" Dec 13 14:19:26.845404 kubelet[2218]: I1213 14:19:26.845329 2218 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znbwm\" (UniqueName: \"kubernetes.io/projected/84732371-5963-42e9-a477-5a035eec1cb2-kube-api-access-znbwm\") pod \"kube-proxy-qg7hd\" (UID: \"84732371-5963-42e9-a477-5a035eec1cb2\") " pod="kube-system/kube-proxy-qg7hd" Dec 13 14:19:26.845404 kubelet[2218]: I1213 14:19:26.845405 2218 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/84732371-5963-42e9-a477-5a035eec1cb2-xtables-lock\") pod \"kube-proxy-qg7hd\" (UID: \"84732371-5963-42e9-a477-5a035eec1cb2\") " pod="kube-system/kube-proxy-qg7hd" Dec 13 14:19:26.845645 kubelet[2218]: I1213 14:19:26.845434 2218 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/84732371-5963-42e9-a477-5a035eec1cb2-lib-modules\") pod \"kube-proxy-qg7hd\" (UID: \"84732371-5963-42e9-a477-5a035eec1cb2\") " pod="kube-system/kube-proxy-qg7hd" Dec 13 14:19:26.845645 kubelet[2218]: I1213 14:19:26.845459 2218 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/84732371-5963-42e9-a477-5a035eec1cb2-kube-proxy\") pod \"kube-proxy-qg7hd\" (UID: \"84732371-5963-42e9-a477-5a035eec1cb2\") " pod="kube-system/kube-proxy-qg7hd" Dec 13 14:19:26.915785 kubelet[2218]: I1213 14:19:26.915718 2218 topology_manager.go:215] "Topology Admit Handler" podUID="82bbce96-114b-407b-85a4-2be32351f168" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-44phz" Dec 13 14:19:26.946601 kubelet[2218]: I1213 14:19:26.946531 2218 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt66b\" (UniqueName: \"kubernetes.io/projected/82bbce96-114b-407b-85a4-2be32351f168-kube-api-access-zt66b\") pod \"tigera-operator-c7ccbd65-44phz\" (UID: \"82bbce96-114b-407b-85a4-2be32351f168\") " pod="tigera-operator/tigera-operator-c7ccbd65-44phz" Dec 13 14:19:26.946601 kubelet[2218]: I1213 14:19:26.946603 2218 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/82bbce96-114b-407b-85a4-2be32351f168-var-lib-calico\") pod \"tigera-operator-c7ccbd65-44phz\" (UID: \"82bbce96-114b-407b-85a4-2be32351f168\") " pod="tigera-operator/tigera-operator-c7ccbd65-44phz" Dec 13 14:19:27.046643 kubelet[2218]: E1213 14:19:27.046496 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:27.047586 env[1311]: time="2024-12-13T14:19:27.047527980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qg7hd,Uid:84732371-5963-42e9-a477-5a035eec1cb2,Namespace:kube-system,Attempt:0,}" Dec 13 14:19:27.072442 env[1311]: time="2024-12-13T14:19:27.072329449Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:19:27.072442 env[1311]: time="2024-12-13T14:19:27.072435959Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:19:27.072669 env[1311]: time="2024-12-13T14:19:27.072460316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:19:27.072775 env[1311]: time="2024-12-13T14:19:27.072732431Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c7d3276f2eda2d8598d8c40b472f29fca5f25956cabfbee243d05d8052915138 pid=2329 runtime=io.containerd.runc.v2 Dec 13 14:19:27.107643 env[1311]: time="2024-12-13T14:19:27.107572869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qg7hd,Uid:84732371-5963-42e9-a477-5a035eec1cb2,Namespace:kube-system,Attempt:0,} returns sandbox id \"c7d3276f2eda2d8598d8c40b472f29fca5f25956cabfbee243d05d8052915138\"" Dec 13 14:19:27.108426 kubelet[2218]: E1213 14:19:27.108372 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:27.110432 env[1311]: time="2024-12-13T14:19:27.110396423Z" level=info msg="CreateContainer within sandbox \"c7d3276f2eda2d8598d8c40b472f29fca5f25956cabfbee243d05d8052915138\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 14:19:27.130085 env[1311]: time="2024-12-13T14:19:27.130006025Z" level=info msg="CreateContainer within sandbox \"c7d3276f2eda2d8598d8c40b472f29fca5f25956cabfbee243d05d8052915138\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"60b2cfb014684bb23ac92ccabf626b7b6c4bf62b1dc899c175bf937ae8546c49\"" Dec 13 14:19:27.130849 env[1311]: time="2024-12-13T14:19:27.130711280Z" level=info msg="StartContainer for \"60b2cfb014684bb23ac92ccabf626b7b6c4bf62b1dc899c175bf937ae8546c49\"" Dec 13 14:19:27.186603 env[1311]: time="2024-12-13T14:19:27.186518297Z" level=info msg="StartContainer for \"60b2cfb014684bb23ac92ccabf626b7b6c4bf62b1dc899c175bf937ae8546c49\" returns successfully" Dec 13 14:19:27.219385 env[1311]: time="2024-12-13T14:19:27.219331458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-44phz,Uid:82bbce96-114b-407b-85a4-2be32351f168,Namespace:tigera-operator,Attempt:0,}" Dec 13 14:19:27.238208 env[1311]: time="2024-12-13T14:19:27.237972026Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:19:27.238208 env[1311]: time="2024-12-13T14:19:27.238053712Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:19:27.238208 env[1311]: time="2024-12-13T14:19:27.238064191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:19:27.238728 env[1311]: time="2024-12-13T14:19:27.238622788Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e69d38e28b94ef19ea015bf960127b467728ab26cb08ffd9a6b0394ec8c7e828 pid=2416 runtime=io.containerd.runc.v2 Dec 13 14:19:27.258076 kernel: audit: type=1325 audit(1734099567.245:231): table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2444 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:19:27.258277 kernel: audit: type=1300 audit(1734099567.245:231): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe61980f20 a2=0 a3=7ffe61980f0c items=0 ppid=2379 pid=2444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:27.258310 kernel: audit: type=1327 audit(1734099567.245:231): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 13 14:19:27.245000 audit[2444]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2444 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:19:27.245000 audit[2444]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe61980f20 a2=0 a3=7ffe61980f0c items=0 ppid=2379 pid=2444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:27.245000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 13 14:19:27.245000 audit[2445]: NETFILTER_CFG table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2445 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:19:27.261123 kernel: audit: type=1325 audit(1734099567.245:232): table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2445 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:19:27.261198 kernel: audit: type=1300 audit(1734099567.245:232): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd91250910 a2=0 a3=7ffd912508fc items=0 ppid=2379 pid=2445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:27.245000 audit[2445]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd91250910 a2=0 a3=7ffd912508fc items=0 ppid=2379 pid=2445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:27.245000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 13 14:19:27.269034 kernel: audit: type=1327 audit(1734099567.245:232): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 13 14:19:27.269100 kernel: audit: type=1325 audit(1734099567.248:233): table=nat:40 family=10 entries=1 op=nft_register_chain pid=2447 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:19:27.248000 audit[2447]: NETFILTER_CFG table=nat:40 family=10 entries=1 op=nft_register_chain 
pid=2447 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:19:27.248000 audit[2447]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffceb5324b0 a2=0 a3=7ffceb53249c items=0 ppid=2379 pid=2447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:27.277154 kernel: audit: type=1300 audit(1734099567.248:233): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffceb5324b0 a2=0 a3=7ffceb53249c items=0 ppid=2379 pid=2447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:27.277245 kernel: audit: type=1327 audit(1734099567.248:233): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 13 14:19:27.248000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 13 14:19:27.254000 audit[2450]: NETFILTER_CFG table=filter:41 family=10 entries=1 op=nft_register_chain pid=2450 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:19:27.282133 kernel: audit: type=1325 audit(1734099567.254:234): table=filter:41 family=10 entries=1 op=nft_register_chain pid=2450 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:19:27.254000 audit[2450]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe957f13f0 a2=0 a3=7ffe957f13dc items=0 ppid=2379 pid=2450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:27.254000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 13 14:19:27.255000 audit[2448]: NETFILTER_CFG table=nat:42 family=2 entries=1 op=nft_register_chain pid=2448 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:19:27.255000 audit[2448]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd35862bd0 a2=0 a3=7ffd35862bbc items=0 ppid=2379 pid=2448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:27.255000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 13 14:19:27.257000 audit[2455]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2455 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:19:27.257000 audit[2455]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffea445b8d0 a2=0 a3=7ffea445b8bc items=0 ppid=2379 pid=2455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:27.257000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 13 14:19:27.305419 env[1311]: time="2024-12-13T14:19:27.305247079Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:tigera-operator-c7ccbd65-44phz,Uid:82bbce96-114b-407b-85a4-2be32351f168,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e69d38e28b94ef19ea015bf960127b467728ab26cb08ffd9a6b0394ec8c7e828\"" Dec 13 14:19:27.309592 env[1311]: time="2024-12-13T14:19:27.308349430Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Dec 13 14:19:27.314609 kubelet[2218]: E1213 14:19:27.314579 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:27.327176 kubelet[2218]: I1213 14:19:27.326969 2218 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-qg7hd" podStartSLOduration=1.326915637 podStartE2EDuration="1.326915637s" podCreationTimestamp="2024-12-13 14:19:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:19:27.326807563 +0000 UTC m=+14.149865476" watchObservedRunningTime="2024-12-13 14:19:27.326915637 +0000 UTC m=+14.149973551" Dec 13 14:19:27.352000 audit[2468]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2468 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:19:27.352000 audit[2468]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffdaff2d230 a2=0 a3=7ffdaff2d21c items=0 ppid=2379 pid=2468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:27.352000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 13 14:19:27.356000 audit[2470]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2470 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:19:27.356000 audit[2470]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffcf396cc30 a2=0 a3=7ffcf396cc1c items=0 ppid=2379 pid=2470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:27.356000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Dec 13 14:19:27.360000 audit[2473]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2473 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:19:27.360000 audit[2473]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc0accd050 a2=0 a3=7ffc0accd03c items=0 ppid=2379 pid=2473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:27.360000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Dec 13 14:19:27.361000 audit[2474]: NETFILTER_CFG 
table=filter:47 family=2 entries=1 op=nft_register_chain pid=2474 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:19:27.361000 audit[2474]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe6923f600 a2=0 a3=7ffe6923f5ec items=0 ppid=2379 pid=2474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:27.361000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 13 14:19:27.364000 audit[2476]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2476 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:19:27.364000 audit[2476]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff6be90070 a2=0 a3=7fff6be9005c items=0 ppid=2379 pid=2476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:27.364000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 13 14:19:27.365000 audit[2477]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2477 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:19:27.365000 audit[2477]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc1d539500 a2=0 a3=7ffc1d5394ec items=0 ppid=2379 pid=2477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:27.365000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 13 14:19:27.369000 audit[2479]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2479 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:19:27.369000 audit[2479]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffdb59079c0 a2=0 a3=7ffdb59079ac items=0 ppid=2379 pid=2479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:27.369000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 13 14:19:27.376000 audit[2482]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2482 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:19:27.376000 audit[2482]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff1cdbedb0 a2=0 a3=7fff1cdbed9c items=0 ppid=2379 pid=2482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:27.376000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Dec 13 14:19:27.378000 audit[2483]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2483 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:19:27.378000 audit[2483]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd57bfb5a0 a2=0 a3=7ffd57bfb58c items=0 ppid=2379 pid=2483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:27.378000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 13 14:19:27.381000 audit[2485]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2485 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:19:27.381000 audit[2485]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffeb6a3b980 a2=0 a3=7ffeb6a3b96c items=0 ppid=2379 pid=2485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:27.381000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 13 14:19:27.382000 audit[2486]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2486 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:19:27.382000 audit[2486]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd3d37b700 a2=0 a3=7ffd3d37b6ec items=0 ppid=2379 pid=2486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:27.382000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 13 14:19:27.384000 audit[2488]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2488 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:19:27.384000 audit[2488]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdf8fb0150 a2=0 a3=7ffdf8fb013c items=0 ppid=2379 pid=2488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:27.384000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 13 14:19:27.388000 audit[2491]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2491 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:19:27.388000 audit[2491]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcb7364cc0 a2=0 a3=7ffcb7364cac items=0 ppid=2379 
pid=2491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:27.388000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 13 14:19:27.392000 audit[2494]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2494 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:19:27.392000 audit[2494]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdb8631b10 a2=0 a3=7ffdb8631afc items=0 ppid=2379 pid=2494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:27.392000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 13 14:19:27.393000 audit[2495]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2495 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:19:27.393000 audit[2495]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff5c90dee0 a2=0 a3=7fff5c90decc items=0 ppid=2379 pid=2495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:27.393000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 13 14:19:27.396000 audit[2497]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2497 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:19:27.396000 audit[2497]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fff30932600 a2=0 a3=7fff309325ec items=0 ppid=2379 pid=2497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:27.396000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 14:19:27.399000 audit[2500]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2500 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:19:27.399000 audit[2500]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff0d793100 a2=0 a3=7fff0d7930ec items=0 ppid=2379 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:27.399000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 14:19:27.400000 audit[2501]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2501 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:19:27.400000 audit[2501]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffca06e3c60 a2=0 a3=7ffca06e3c4c items=0 ppid=2379 pid=2501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:27.400000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 13 14:19:27.403000 audit[2503]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2503 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:19:27.403000 audit[2503]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffddfd75480 a2=0 a3=7ffddfd7546c items=0 ppid=2379 pid=2503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:27.403000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 13 14:19:27.423000 audit[2509]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2509 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:19:27.423000 audit[2509]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffd436f7f90 a2=0 a3=7ffd436f7f7c items=0 ppid=2379 pid=2509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:27.423000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:19:27.432000 audit[2509]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2509 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:19:27.432000 audit[2509]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffd436f7f90 a2=0 a3=7ffd436f7f7c items=0 ppid=2379 pid=2509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:27.432000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:19:27.434000 audit[2515]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2515 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:19:27.434000 audit[2515]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd0f64bc00 a2=0 a3=7ffd0f64bbec items=0 ppid=2379 pid=2515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:27.434000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 13 14:19:27.436000 audit[2517]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2517 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:19:27.436000 audit[2517]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffea762b790 a2=0 a3=7ffea762b77c items=0 ppid=2379 pid=2517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:27.436000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Dec 13 14:19:27.440000 audit[2520]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2520 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:19:27.440000 audit[2520]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffec30c6a90 a2=0 a3=7ffec30c6a7c items=0 ppid=2379 pid=2520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:27.440000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Dec 13 14:19:27.441000 audit[2521]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2521 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:19:27.441000 audit[2521]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffa3b70d80 a2=0 a3=7fffa3b70d6c items=0 ppid=2379 pid=2521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:27.441000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 13 14:19:27.444000 audit[2523]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2523 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:19:27.444000 audit[2523]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffe91d39f0 a2=0 a3=7fffe91d39dc items=0 ppid=2379 pid=2523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:27.444000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 13 14:19:27.445000 audit[2524]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2524 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 
14:19:27.445000 audit[2524]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe47af54a0 a2=0 a3=7ffe47af548c items=0 ppid=2379 pid=2524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:27.445000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 13 14:19:27.448000 audit[2526]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2526 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:19:27.448000 audit[2526]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd0bda1cc0 a2=0 a3=7ffd0bda1cac items=0 ppid=2379 pid=2526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:27.448000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Dec 13 14:19:27.452000 audit[2529]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2529 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:19:27.452000 audit[2529]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7fff73e56550 a2=0 a3=7fff73e5653c items=0 ppid=2379 pid=2529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:27.452000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 13 14:19:27.453000 audit[2530]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2530 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:19:27.453000 audit[2530]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff880784c0 a2=0 a3=7fff880784ac items=0 ppid=2379 pid=2530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:27.453000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 13 14:19:27.455000 audit[2532]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2532 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:19:27.455000 audit[2532]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc3b0dcad0 a2=0 a3=7ffc3b0dcabc items=0 ppid=2379 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:27.455000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 13 14:19:27.456000 audit[2533]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2533 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:19:27.456000 audit[2533]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdcd09e300 a2=0 a3=7ffdcd09e2ec items=0 ppid=2379 pid=2533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:27.456000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 13 14:19:27.459000 audit[2535]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2535 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:19:27.459000 audit[2535]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff8436b0e0 a2=0 a3=7fff8436b0cc items=0 ppid=2379 pid=2535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:27.459000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 13 14:19:27.462000 audit[2538]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2538 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:19:27.462000 audit[2538]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff0ea7cfc0 a2=0 a3=7fff0ea7cfac items=0 ppid=2379 pid=2538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:27.462000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 13 14:19:27.466000 audit[2541]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2541 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:19:27.466000 audit[2541]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe5a98ad80 a2=0 a3=7ffe5a98ad6c items=0 ppid=2379 pid=2541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:27.466000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Dec 13 14:19:27.467000 audit[2542]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2542 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Dec 13 14:19:27.467000 audit[2542]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe484a7370 a2=0 a3=7ffe484a735c items=0 ppid=2379 pid=2542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:27.467000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 13 14:19:27.469000 audit[2544]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2544 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:19:27.469000 audit[2544]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffc412a81d0 a2=0 a3=7ffc412a81bc items=0 ppid=2379 pid=2544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:27.469000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 14:19:27.472000 audit[2547]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2547 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:19:27.472000 audit[2547]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7fffdca840e0 a2=0 a3=7fffdca840cc items=0 ppid=2379 pid=2547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:27.472000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 14:19:27.473000 audit[2548]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2548 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:19:27.473000 audit[2548]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcf9f6e580 a2=0 a3=7ffcf9f6e56c items=0 ppid=2379 pid=2548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:27.473000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 13 14:19:27.475000 audit[2550]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2550 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:19:27.475000 audit[2550]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffc376db770 a2=0 a3=7ffc376db75c items=0 ppid=2379 pid=2550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:27.475000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 13 14:19:27.476000 audit[2551]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2551 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:19:27.476000 audit[2551]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe30040a80 a2=0 a3=7ffe30040a6c items=0 ppid=2379 pid=2551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:27.476000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 13 14:19:27.479000 audit[2553]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2553 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:19:27.479000 audit[2553]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff03dc1b40 a2=0 a3=7fff03dc1b2c items=0 ppid=2379 pid=2553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:27.479000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 14:19:27.482000 audit[2556]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2556 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:19:27.482000 audit[2556]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe53454cb0 a2=0 a3=7ffe53454c9c items=0 ppid=2379 pid=2556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:27.482000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 14:19:27.484000 audit[2558]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2558 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 13 14:19:27.484000 audit[2558]: SYSCALL arch=c000003e syscall=46 success=yes exit=2004 a0=3 a1=7ffcf2d32d30 a2=0 a3=7ffcf2d32d1c items=0 ppid=2379 pid=2558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:27.484000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:19:27.485000 audit[2558]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2558 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 13 14:19:27.485000 audit[2558]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffcf2d32d30 a2=0 a3=7ffcf2d32d1c items=0 ppid=2379 pid=2558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:27.485000 
audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:19:30.592879 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2857248766.mount: Deactivated successfully. Dec 13 14:19:31.273188 env[1311]: time="2024-12-13T14:19:31.273079399Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.36.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:31.275911 env[1311]: time="2024-12-13T14:19:31.275858187Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:31.277927 env[1311]: time="2024-12-13T14:19:31.277871699Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.36.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:31.280400 env[1311]: time="2024-12-13T14:19:31.280334892Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:31.281139 env[1311]: time="2024-12-13T14:19:31.281061534Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Dec 13 14:19:31.283176 env[1311]: time="2024-12-13T14:19:31.283095736Z" level=info msg="CreateContainer within sandbox \"e69d38e28b94ef19ea015bf960127b467728ab26cb08ffd9a6b0394ec8c7e828\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 13 14:19:31.304174 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3583362711.mount: Deactivated successfully. 
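The PROCTITLE fields in the audit records above are the invoked command lines, hex-encoded with NUL bytes separating the arguments. A minimal Python sketch for decoding one (the sample value is copied from the first PROCTITLE in this section; it decodes to the iptables invocation that registers the KUBE-NODEPORTS chain):

    # Decode a Linux audit PROCTITLE value: a hex string of the process argv,
    # with NUL (0x00) bytes between arguments.
    proctitle_hex = "69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572"
    argv = bytes.fromhex(proctitle_hex).split(b"\x00")
    print(" ".join(arg.decode() for arg in argv))
    # prints: iptables -w 5 -W 100000 -N KUBE-NODEPORTS -t filter

The same decoding applies to every PROCTITLE record in this section, including the ip6tables and iptables-restore entries that follow.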
Dec 13 14:19:31.308644 env[1311]: time="2024-12-13T14:19:31.308580468Z" level=info msg="CreateContainer within sandbox \"e69d38e28b94ef19ea015bf960127b467728ab26cb08ffd9a6b0394ec8c7e828\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"0e5a3685eaf10d7eba0e5e4ffb7085e2139dc7b53db2e205efba80df566f09fb\"" Dec 13 14:19:31.309420 env[1311]: time="2024-12-13T14:19:31.309383945Z" level=info msg="StartContainer for \"0e5a3685eaf10d7eba0e5e4ffb7085e2139dc7b53db2e205efba80df566f09fb\"" Dec 13 14:19:31.363976 env[1311]: time="2024-12-13T14:19:31.363762247Z" level=info msg="StartContainer for \"0e5a3685eaf10d7eba0e5e4ffb7085e2139dc7b53db2e205efba80df566f09fb\" returns successfully" Dec 13 14:19:33.289695 kubelet[2218]: I1213 14:19:33.289590 2218 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-44phz" podStartSLOduration=3.31565611 podStartE2EDuration="7.289531644s" podCreationTimestamp="2024-12-13 14:19:26 +0000 UTC" firstStartedPulling="2024-12-13 14:19:27.307635479 +0000 UTC m=+14.130693392" lastFinishedPulling="2024-12-13 14:19:31.281511013 +0000 UTC m=+18.104568926" observedRunningTime="2024-12-13 14:19:32.340121831 +0000 UTC m=+19.163179744" watchObservedRunningTime="2024-12-13 14:19:33.289531644 +0000 UTC m=+20.112589577" Dec 13 14:19:34.174000 audit[2598]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2598 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:19:34.177388 kernel: kauditd_printk_skb: 143 callbacks suppressed Dec 13 14:19:34.177458 kernel: audit: type=1325 audit(1734099574.174:282): table=filter:89 family=2 entries=15 op=nft_register_rule pid=2598 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:19:34.174000 audit[2598]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fffe91e80f0 a2=0 a3=7fffe91e80dc items=0 ppid=2379 pid=2598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:34.186665 kernel: audit: type=1300 audit(1734099574.174:282): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fffe91e80f0 a2=0 a3=7fffe91e80dc items=0 ppid=2379 pid=2598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:34.186749 kernel: audit: type=1327 audit(1734099574.174:282): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:19:34.174000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:19:34.190000 audit[2598]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2598 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:19:34.190000 audit[2598]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffe91e80f0 a2=0 a3=0 items=0 ppid=2379 pid=2598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:34.201125 kernel: audit: type=1325 audit(1734099574.190:283): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2598 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:19:34.201316 kernel: audit: type=1300 audit(1734099574.190:283): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffe91e80f0 a2=0 a3=0 items=0 ppid=2379 pid=2598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:34.201342 kernel: audit: type=1327 audit(1734099574.190:283): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:19:34.190000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:19:34.211000 audit[2600]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2600 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:19:34.211000 audit[2600]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffcd15240e0 a2=0 a3=7ffcd15240cc items=0 ppid=2379 pid=2600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:34.220910 kernel: audit: type=1325 audit(1734099574.211:284): table=filter:91 family=2 entries=16 op=nft_register_rule pid=2600 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:19:34.221166 kernel: audit: type=1300 audit(1734099574.211:284): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffcd15240e0 a2=0 a3=7ffcd15240cc items=0 ppid=2379 pid=2600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:34.221228 kernel: audit: type=1327 audit(1734099574.211:284): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:19:34.211000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:19:34.224000 audit[2600]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2600 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:19:34.224000 audit[2600]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcd15240e0 a2=0 a3=0 items=0 ppid=2379 pid=2600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:34.224000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:19:34.229056 kernel: audit: type=1325 audit(1734099574.224:285): table=nat:92 family=2 entries=12 op=nft_register_rule pid=2600 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:19:34.439491 kubelet[2218]: I1213 14:19:34.439330 2218 topology_manager.go:215] "Topology Admit Handler" podUID="59e7ad6a-ffb7-4da1-b9e3-d4f8d547117e" podNamespace="calico-system" podName="calico-typha-64d66d978b-lc7ml" Dec 13 14:19:34.490271 kubelet[2218]: I1213 14:19:34.490214 2218 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: 
\"kubernetes.io/secret/59e7ad6a-ffb7-4da1-b9e3-d4f8d547117e-typha-certs\") pod \"calico-typha-64d66d978b-lc7ml\" (UID: \"59e7ad6a-ffb7-4da1-b9e3-d4f8d547117e\") " pod="calico-system/calico-typha-64d66d978b-lc7ml" Dec 13 14:19:34.490271 kubelet[2218]: I1213 14:19:34.490278 2218 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59e7ad6a-ffb7-4da1-b9e3-d4f8d547117e-tigera-ca-bundle\") pod \"calico-typha-64d66d978b-lc7ml\" (UID: \"59e7ad6a-ffb7-4da1-b9e3-d4f8d547117e\") " pod="calico-system/calico-typha-64d66d978b-lc7ml" Dec 13 14:19:34.490542 kubelet[2218]: I1213 14:19:34.490308 2218 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdkxz\" (UniqueName: \"kubernetes.io/projected/59e7ad6a-ffb7-4da1-b9e3-d4f8d547117e-kube-api-access-pdkxz\") pod \"calico-typha-64d66d978b-lc7ml\" (UID: \"59e7ad6a-ffb7-4da1-b9e3-d4f8d547117e\") " pod="calico-system/calico-typha-64d66d978b-lc7ml" Dec 13 14:19:34.563179 kubelet[2218]: I1213 14:19:34.563119 2218 topology_manager.go:215] "Topology Admit Handler" podUID="cd51ba47-d776-45fc-b4dc-defc43f7ab17" podNamespace="calico-system" podName="calico-node-xtcr5" Dec 13 14:19:34.591139 kubelet[2218]: I1213 14:19:34.591072 2218 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/cd51ba47-d776-45fc-b4dc-defc43f7ab17-policysync\") pod \"calico-node-xtcr5\" (UID: \"cd51ba47-d776-45fc-b4dc-defc43f7ab17\") " pod="calico-system/calico-node-xtcr5" Dec 13 14:19:34.591139 kubelet[2218]: I1213 14:19:34.591141 2218 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd51ba47-d776-45fc-b4dc-defc43f7ab17-tigera-ca-bundle\") pod \"calico-node-xtcr5\" (UID: \"cd51ba47-d776-45fc-b4dc-defc43f7ab17\") " pod="calico-system/calico-node-xtcr5" Dec 13 14:19:34.591372 kubelet[2218]: I1213 14:19:34.591164 2218 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/cd51ba47-d776-45fc-b4dc-defc43f7ab17-cni-bin-dir\") pod \"calico-node-xtcr5\" (UID: \"cd51ba47-d776-45fc-b4dc-defc43f7ab17\") " pod="calico-system/calico-node-xtcr5" Dec 13 14:19:34.591372 kubelet[2218]: I1213 14:19:34.591268 2218 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/cd51ba47-d776-45fc-b4dc-defc43f7ab17-cni-log-dir\") pod \"calico-node-xtcr5\" (UID: \"cd51ba47-d776-45fc-b4dc-defc43f7ab17\") " pod="calico-system/calico-node-xtcr5" Dec 13 14:19:34.591434 kubelet[2218]: I1213 14:19:34.591411 2218 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/cd51ba47-d776-45fc-b4dc-defc43f7ab17-var-run-calico\") pod \"calico-node-xtcr5\" (UID: \"cd51ba47-d776-45fc-b4dc-defc43f7ab17\") " pod="calico-system/calico-node-xtcr5" Dec 13 14:19:34.591460 kubelet[2218]: I1213 14:19:34.591448 2218 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cd51ba47-d776-45fc-b4dc-defc43f7ab17-lib-modules\") pod \"calico-node-xtcr5\" (UID: \"cd51ba47-d776-45fc-b4dc-defc43f7ab17\") " 
pod="calico-system/calico-node-xtcr5" Dec 13 14:19:34.591487 kubelet[2218]: I1213 14:19:34.591481 2218 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cd51ba47-d776-45fc-b4dc-defc43f7ab17-xtables-lock\") pod \"calico-node-xtcr5\" (UID: \"cd51ba47-d776-45fc-b4dc-defc43f7ab17\") " pod="calico-system/calico-node-xtcr5" Dec 13 14:19:34.591535 kubelet[2218]: I1213 14:19:34.591510 2218 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/cd51ba47-d776-45fc-b4dc-defc43f7ab17-node-certs\") pod \"calico-node-xtcr5\" (UID: \"cd51ba47-d776-45fc-b4dc-defc43f7ab17\") " pod="calico-system/calico-node-xtcr5" Dec 13 14:19:34.591652 kubelet[2218]: I1213 14:19:34.591602 2218 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/cd51ba47-d776-45fc-b4dc-defc43f7ab17-flexvol-driver-host\") pod \"calico-node-xtcr5\" (UID: \"cd51ba47-d776-45fc-b4dc-defc43f7ab17\") " pod="calico-system/calico-node-xtcr5" Dec 13 14:19:34.591822 kubelet[2218]: I1213 14:19:34.591681 2218 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjchm\" (UniqueName: \"kubernetes.io/projected/cd51ba47-d776-45fc-b4dc-defc43f7ab17-kube-api-access-hjchm\") pod \"calico-node-xtcr5\" (UID: \"cd51ba47-d776-45fc-b4dc-defc43f7ab17\") " pod="calico-system/calico-node-xtcr5" Dec 13 14:19:34.591822 kubelet[2218]: I1213 14:19:34.591711 2218 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/cd51ba47-d776-45fc-b4dc-defc43f7ab17-cni-net-dir\") pod \"calico-node-xtcr5\" (UID: \"cd51ba47-d776-45fc-b4dc-defc43f7ab17\") " pod="calico-system/calico-node-xtcr5" Dec 13 14:19:34.591822 kubelet[2218]: I1213 14:19:34.591738 2218 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/cd51ba47-d776-45fc-b4dc-defc43f7ab17-var-lib-calico\") pod \"calico-node-xtcr5\" (UID: \"cd51ba47-d776-45fc-b4dc-defc43f7ab17\") " pod="calico-system/calico-node-xtcr5" Dec 13 14:19:34.690137 kubelet[2218]: I1213 14:19:34.689948 2218 topology_manager.go:215] "Topology Admit Handler" podUID="b485a5b2-c009-42f0-8598-051c15f90fca" podNamespace="calico-system" podName="csi-node-driver-g44h2" Dec 13 14:19:34.690320 kubelet[2218]: E1213 14:19:34.690303 2218 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g44h2" podUID="b485a5b2-c009-42f0-8598-051c15f90fca" Dec 13 14:19:34.702800 kubelet[2218]: E1213 14:19:34.702748 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:34.702800 kubelet[2218]: W1213 14:19:34.702774 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:34.702800 kubelet[2218]: E1213 14:19:34.702803 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating 
Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:34.714165 kubelet[2218]: E1213 14:19:34.714115 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:34.714165 kubelet[2218]: W1213 14:19:34.714149 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:34.714165 kubelet[2218]: E1213 14:19:34.714181 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:34.745003 kubelet[2218]: E1213 14:19:34.744924 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:34.745897 env[1311]: time="2024-12-13T14:19:34.745706956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-64d66d978b-lc7ml,Uid:59e7ad6a-ffb7-4da1-b9e3-d4f8d547117e,Namespace:calico-system,Attempt:0,}" Dec 13 14:19:34.777612 env[1311]: time="2024-12-13T14:19:34.777489867Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:19:34.777612 env[1311]: time="2024-12-13T14:19:34.777539531Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:19:34.777612 env[1311]: time="2024-12-13T14:19:34.777551183Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:19:34.778210 env[1311]: time="2024-12-13T14:19:34.778132139Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3fdc5a45fa8e37ed31ba8d5e2766c35fb504677c2fe843d0567038c0c6083b7d pid=2620 runtime=io.containerd.runc.v2 Dec 13 14:19:34.784554 kubelet[2218]: E1213 14:19:34.784496 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:34.784554 kubelet[2218]: W1213 14:19:34.784530 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:34.784554 kubelet[2218]: E1213 14:19:34.784562 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:34.784953 kubelet[2218]: E1213 14:19:34.784935 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:34.784953 kubelet[2218]: W1213 14:19:34.784950 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:34.785241 kubelet[2218]: E1213 14:19:34.784969 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:19:34.785317 kubelet[2218]: E1213 14:19:34.785252 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:34.785317 kubelet[2218]: W1213 14:19:34.785262 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:34.785317 kubelet[2218]: E1213 14:19:34.785275 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:34.785502 kubelet[2218]: E1213 14:19:34.785489 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:34.785502 kubelet[2218]: W1213 14:19:34.785496 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:34.785502 kubelet[2218]: E1213 14:19:34.785507 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:34.785694 kubelet[2218]: E1213 14:19:34.785632 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:34.785694 kubelet[2218]: W1213 14:19:34.785639 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:34.785694 kubelet[2218]: E1213 14:19:34.785648 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:34.785907 kubelet[2218]: E1213 14:19:34.785760 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:34.785907 kubelet[2218]: W1213 14:19:34.785768 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:34.785907 kubelet[2218]: E1213 14:19:34.785777 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:34.785907 kubelet[2218]: E1213 14:19:34.785883 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:34.785907 kubelet[2218]: W1213 14:19:34.785889 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:34.785907 kubelet[2218]: E1213 14:19:34.785899 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:19:34.786487 kubelet[2218]: E1213 14:19:34.786212 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:34.786487 kubelet[2218]: W1213 14:19:34.786222 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:34.786487 kubelet[2218]: E1213 14:19:34.786234 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:34.786811 kubelet[2218]: E1213 14:19:34.786594 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:34.786811 kubelet[2218]: W1213 14:19:34.786631 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:34.786811 kubelet[2218]: E1213 14:19:34.786648 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:34.786987 kubelet[2218]: E1213 14:19:34.786904 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:34.786987 kubelet[2218]: W1213 14:19:34.786914 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:34.786987 kubelet[2218]: E1213 14:19:34.786928 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:34.787236 kubelet[2218]: E1213 14:19:34.787161 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:34.787236 kubelet[2218]: W1213 14:19:34.787171 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:34.787236 kubelet[2218]: E1213 14:19:34.787182 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:34.787447 kubelet[2218]: E1213 14:19:34.787414 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:34.787447 kubelet[2218]: W1213 14:19:34.787423 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:34.787447 kubelet[2218]: E1213 14:19:34.787434 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:19:34.787705 kubelet[2218]: E1213 14:19:34.787688 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:34.787705 kubelet[2218]: W1213 14:19:34.787702 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:34.787705 kubelet[2218]: E1213 14:19:34.787715 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:34.788006 kubelet[2218]: E1213 14:19:34.787974 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:34.788129 kubelet[2218]: W1213 14:19:34.788044 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:34.788129 kubelet[2218]: E1213 14:19:34.788063 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:34.789816 kubelet[2218]: E1213 14:19:34.789763 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:34.789915 kubelet[2218]: W1213 14:19:34.789820 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:34.789915 kubelet[2218]: E1213 14:19:34.789842 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:34.790404 kubelet[2218]: E1213 14:19:34.790382 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:34.790404 kubelet[2218]: W1213 14:19:34.790399 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:34.790566 kubelet[2218]: E1213 14:19:34.790416 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:34.790749 kubelet[2218]: E1213 14:19:34.790721 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:34.790749 kubelet[2218]: W1213 14:19:34.790739 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:34.790749 kubelet[2218]: E1213 14:19:34.790751 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:19:34.791152 kubelet[2218]: E1213 14:19:34.791127 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:34.791152 kubelet[2218]: W1213 14:19:34.791144 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:34.791152 kubelet[2218]: E1213 14:19:34.791157 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:34.791476 kubelet[2218]: E1213 14:19:34.791451 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:34.791476 kubelet[2218]: W1213 14:19:34.791466 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:34.791476 kubelet[2218]: E1213 14:19:34.791479 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:34.792306 kubelet[2218]: E1213 14:19:34.792279 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:34.792306 kubelet[2218]: W1213 14:19:34.792298 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:34.792391 kubelet[2218]: E1213 14:19:34.792314 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:34.793716 kubelet[2218]: E1213 14:19:34.793689 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:34.793716 kubelet[2218]: W1213 14:19:34.793712 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:34.793863 kubelet[2218]: E1213 14:19:34.793742 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:19:34.793863 kubelet[2218]: I1213 14:19:34.793797 2218 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b485a5b2-c009-42f0-8598-051c15f90fca-kubelet-dir\") pod \"csi-node-driver-g44h2\" (UID: \"b485a5b2-c009-42f0-8598-051c15f90fca\") " pod="calico-system/csi-node-driver-g44h2" Dec 13 14:19:34.800047 kubelet[2218]: E1213 14:19:34.794735 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:34.800047 kubelet[2218]: W1213 14:19:34.794752 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:34.800047 kubelet[2218]: E1213 14:19:34.794769 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:34.800047 kubelet[2218]: I1213 14:19:34.794795 2218 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b485a5b2-c009-42f0-8598-051c15f90fca-socket-dir\") pod \"csi-node-driver-g44h2\" (UID: \"b485a5b2-c009-42f0-8598-051c15f90fca\") " pod="calico-system/csi-node-driver-g44h2" Dec 13 14:19:34.800047 kubelet[2218]: E1213 14:19:34.795201 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:34.800047 kubelet[2218]: W1213 14:19:34.795235 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:34.800047 kubelet[2218]: E1213 14:19:34.795284 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:34.800047 kubelet[2218]: I1213 14:19:34.795332 2218 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b485a5b2-c009-42f0-8598-051c15f90fca-varrun\") pod \"csi-node-driver-g44h2\" (UID: \"b485a5b2-c009-42f0-8598-051c15f90fca\") " pod="calico-system/csi-node-driver-g44h2" Dec 13 14:19:34.800047 kubelet[2218]: E1213 14:19:34.797697 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:34.800446 kubelet[2218]: W1213 14:19:34.797714 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:34.800446 kubelet[2218]: E1213 14:19:34.797874 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:19:34.800446 kubelet[2218]: I1213 14:19:34.797907 2218 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b485a5b2-c009-42f0-8598-051c15f90fca-registration-dir\") pod \"csi-node-driver-g44h2\" (UID: \"b485a5b2-c009-42f0-8598-051c15f90fca\") " pod="calico-system/csi-node-driver-g44h2" Dec 13 14:19:34.800446 kubelet[2218]: E1213 14:19:34.797986 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:34.800446 kubelet[2218]: W1213 14:19:34.797992 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:34.800446 kubelet[2218]: E1213 14:19:34.798186 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:34.800446 kubelet[2218]: E1213 14:19:34.798235 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:34.800446 kubelet[2218]: W1213 14:19:34.798241 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:34.800446 kubelet[2218]: E1213 14:19:34.798321 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:34.800677 kubelet[2218]: E1213 14:19:34.798453 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:34.800677 kubelet[2218]: W1213 14:19:34.798464 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:34.800677 kubelet[2218]: E1213 14:19:34.798584 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:34.800677 kubelet[2218]: E1213 14:19:34.798736 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:34.800677 kubelet[2218]: W1213 14:19:34.798748 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:34.800677 kubelet[2218]: E1213 14:19:34.798766 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:19:34.800677 kubelet[2218]: I1213 14:19:34.798787 2218 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qndl6\" (UniqueName: \"kubernetes.io/projected/b485a5b2-c009-42f0-8598-051c15f90fca-kube-api-access-qndl6\") pod \"csi-node-driver-g44h2\" (UID: \"b485a5b2-c009-42f0-8598-051c15f90fca\") " pod="calico-system/csi-node-driver-g44h2" Dec 13 14:19:34.800677 kubelet[2218]: E1213 14:19:34.799009 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:34.800677 kubelet[2218]: W1213 14:19:34.799042 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:34.800952 kubelet[2218]: E1213 14:19:34.799056 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:34.800952 kubelet[2218]: E1213 14:19:34.799240 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:34.800952 kubelet[2218]: W1213 14:19:34.799247 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:34.800952 kubelet[2218]: E1213 14:19:34.799258 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:34.800952 kubelet[2218]: E1213 14:19:34.799501 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:34.800952 kubelet[2218]: W1213 14:19:34.799509 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:34.800952 kubelet[2218]: E1213 14:19:34.799523 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:34.800952 kubelet[2218]: E1213 14:19:34.799717 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:34.800952 kubelet[2218]: W1213 14:19:34.799725 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:34.800952 kubelet[2218]: E1213 14:19:34.799739 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:19:34.801485 kubelet[2218]: E1213 14:19:34.799906 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:34.801485 kubelet[2218]: W1213 14:19:34.799914 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:34.801485 kubelet[2218]: E1213 14:19:34.799924 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:34.801965 kubelet[2218]: E1213 14:19:34.801808 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:34.801965 kubelet[2218]: W1213 14:19:34.801821 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:34.801965 kubelet[2218]: E1213 14:19:34.801836 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:34.802785 kubelet[2218]: E1213 14:19:34.802736 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:34.802785 kubelet[2218]: W1213 14:19:34.802747 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:34.802785 kubelet[2218]: E1213 14:19:34.802762 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:19:34.844969 env[1311]: time="2024-12-13T14:19:34.844903460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-64d66d978b-lc7ml,Uid:59e7ad6a-ffb7-4da1-b9e3-d4f8d547117e,Namespace:calico-system,Attempt:0,} returns sandbox id \"3fdc5a45fa8e37ed31ba8d5e2766c35fb504677c2fe843d0567038c0c6083b7d\"" Dec 13 14:19:34.845695 kubelet[2218]: E1213 14:19:34.845669 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:34.847282 env[1311]: time="2024-12-13T14:19:34.847240037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Dec 13 14:19:34.867988 kubelet[2218]: E1213 14:19:34.867596 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:34.868336 env[1311]: time="2024-12-13T14:19:34.868289032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xtcr5,Uid:cd51ba47-d776-45fc-b4dc-defc43f7ab17,Namespace:calico-system,Attempt:0,}" Dec 13 14:19:34.900157 kubelet[2218]: E1213 14:19:34.900116 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:34.900157 kubelet[2218]: W1213 14:19:34.900141 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:34.900157 kubelet[2218]: E1213 14:19:34.900170 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:34.900471 kubelet[2218]: E1213 14:19:34.900396 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:34.900471 kubelet[2218]: W1213 14:19:34.900407 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:34.900471 kubelet[2218]: E1213 14:19:34.900430 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:34.900650 kubelet[2218]: E1213 14:19:34.900623 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:34.900650 kubelet[2218]: W1213 14:19:34.900637 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:34.900650 kubelet[2218]: E1213 14:19:34.900653 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:19:34.901006 kubelet[2218]: E1213 14:19:34.900971 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:34.901097 kubelet[2218]: W1213 14:19:34.901070 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:34.901125 kubelet[2218]: E1213 14:19:34.901117 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:34.901296 kubelet[2218]: E1213 14:19:34.901281 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:34.901296 kubelet[2218]: W1213 14:19:34.901291 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:34.901367 kubelet[2218]: E1213 14:19:34.901306 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:34.901485 kubelet[2218]: E1213 14:19:34.901470 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:34.901485 kubelet[2218]: W1213 14:19:34.901479 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:34.901555 kubelet[2218]: E1213 14:19:34.901495 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:34.901776 kubelet[2218]: E1213 14:19:34.901760 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:34.901776 kubelet[2218]: W1213 14:19:34.901771 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:34.901847 kubelet[2218]: E1213 14:19:34.901817 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:34.902104 kubelet[2218]: E1213 14:19:34.902052 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:34.902104 kubelet[2218]: W1213 14:19:34.902067 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:34.902104 kubelet[2218]: E1213 14:19:34.902110 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:19:34.902406 kubelet[2218]: E1213 14:19:34.902286 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:34.902406 kubelet[2218]: W1213 14:19:34.902295 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:34.902406 kubelet[2218]: E1213 14:19:34.902363 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:34.902483 kubelet[2218]: E1213 14:19:34.902455 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:34.902483 kubelet[2218]: W1213 14:19:34.902464 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:34.902919 kubelet[2218]: E1213 14:19:34.902890 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:34.903104 kubelet[2218]: E1213 14:19:34.903069 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:34.903104 kubelet[2218]: W1213 14:19:34.903096 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:34.903104 kubelet[2218]: E1213 14:19:34.903116 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:34.903331 kubelet[2218]: E1213 14:19:34.903311 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:34.903331 kubelet[2218]: W1213 14:19:34.903328 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:34.903398 kubelet[2218]: E1213 14:19:34.903343 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:34.903600 kubelet[2218]: E1213 14:19:34.903566 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:34.903600 kubelet[2218]: W1213 14:19:34.903586 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:34.903692 kubelet[2218]: E1213 14:19:34.903608 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:19:34.903973 kubelet[2218]: E1213 14:19:34.903958 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:34.903973 kubelet[2218]: W1213 14:19:34.903973 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:34.904069 kubelet[2218]: E1213 14:19:34.903991 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:34.908496 kubelet[2218]: E1213 14:19:34.908459 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:34.908496 kubelet[2218]: W1213 14:19:34.908488 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:34.908716 kubelet[2218]: E1213 14:19:34.908697 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:34.908762 kubelet[2218]: W1213 14:19:34.908720 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:34.908989 kubelet[2218]: E1213 14:19:34.908950 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:34.908989 kubelet[2218]: W1213 14:19:34.908984 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:34.909262 kubelet[2218]: E1213 14:19:34.909068 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:34.909343 kubelet[2218]: E1213 14:19:34.909317 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:34.909343 kubelet[2218]: W1213 14:19:34.909333 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:34.909343 kubelet[2218]: E1213 14:19:34.909344 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:34.909490 kubelet[2218]: E1213 14:19:34.909376 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:34.909530 kubelet[2218]: E1213 14:19:34.909494 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:19:34.910159 kubelet[2218]: E1213 14:19:34.910117 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:34.910226 kubelet[2218]: W1213 14:19:34.910176 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:34.910258 kubelet[2218]: E1213 14:19:34.910235 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:34.910942 kubelet[2218]: E1213 14:19:34.910924 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:34.910942 kubelet[2218]: W1213 14:19:34.910942 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:34.911074 kubelet[2218]: E1213 14:19:34.910964 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:34.911319 kubelet[2218]: E1213 14:19:34.911298 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:34.911319 kubelet[2218]: W1213 14:19:34.911312 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:34.911472 kubelet[2218]: E1213 14:19:34.911429 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:34.911568 kubelet[2218]: E1213 14:19:34.911550 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:34.911568 kubelet[2218]: W1213 14:19:34.911564 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:34.911638 kubelet[2218]: E1213 14:19:34.911605 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:34.911774 kubelet[2218]: E1213 14:19:34.911758 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:34.911774 kubelet[2218]: W1213 14:19:34.911770 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:34.911841 kubelet[2218]: E1213 14:19:34.911783 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:19:34.912073 kubelet[2218]: E1213 14:19:34.912059 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:34.912073 kubelet[2218]: W1213 14:19:34.912070 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:34.912167 kubelet[2218]: E1213 14:19:34.912092 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:34.912296 kubelet[2218]: E1213 14:19:34.912282 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:34.912296 kubelet[2218]: W1213 14:19:34.912291 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:34.912368 kubelet[2218]: E1213 14:19:34.912302 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:34.984395 kubelet[2218]: E1213 14:19:34.984325 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:34.984555 kubelet[2218]: W1213 14:19:34.984414 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:34.984555 kubelet[2218]: E1213 14:19:34.984448 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:35.106569 env[1311]: time="2024-12-13T14:19:35.106448091Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:19:35.106767 env[1311]: time="2024-12-13T14:19:35.106565171Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:19:35.106767 env[1311]: time="2024-12-13T14:19:35.106618983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:19:35.106944 env[1311]: time="2024-12-13T14:19:35.106893762Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3da41d5f68afd8308631b4e16b983a97b375567fadca013a2d27b95504bf797b pid=2728 runtime=io.containerd.runc.v2 Dec 13 14:19:35.153436 env[1311]: time="2024-12-13T14:19:35.153372142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xtcr5,Uid:cd51ba47-d776-45fc-b4dc-defc43f7ab17,Namespace:calico-system,Attempt:0,} returns sandbox id \"3da41d5f68afd8308631b4e16b983a97b375567fadca013a2d27b95504bf797b\"" Dec 13 14:19:35.154399 kubelet[2218]: E1213 14:19:35.154355 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:35.237000 audit[2762]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=2762 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:19:35.237000 audit[2762]: SYSCALL arch=c000003e syscall=46 success=yes exit=6652 a0=3 a1=7fffa0819ed0 a2=0 a3=7fffa0819ebc items=0 ppid=2379 pid=2762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:35.237000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:19:35.243000 audit[2762]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2762 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:19:35.243000 audit[2762]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffa0819ed0 a2=0 a3=0 items=0 ppid=2379 pid=2762 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:35.243000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:19:36.272257 kubelet[2218]: E1213 14:19:36.272197 2218 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g44h2" podUID="b485a5b2-c009-42f0-8598-051c15f90fca" Dec 13 14:19:37.423656 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4003978937.mount: Deactivated successfully. 
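Note on the repeated kubelet errors above: the "Failed to unmarshal output for command: init" / "FlexVolume: driver call failed" / "Error dynamically probing plugins" triplets all describe one condition. The kubelet found a FlexVolume plugin directory named nodeagent~uds, but the driver executable /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds is not on disk yet, so each init call-out returns empty output and the JSON decode fails. In a Calico deployment this binary is typically installed by the flexvol init container (the ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1 image pulled later in this log), so the noise would be expected to subside once calico-node is running. As a hedged illustration only, and assuming nothing about this host beyond the path and the init call visible in the log, a minimal FlexVolume driver stub satisfying the kubelet's probe could look roughly like this:

    #!/usr/bin/env python3
    # Hypothetical FlexVolume driver stub ("uds"), shown only to illustrate the
    # call-out contract the kubelet is probing above: it runs "<driver> init"
    # and expects a JSON status object on stdout.
    import json
    import sys

    def main() -> int:
        cmd = sys.argv[1] if len(sys.argv) > 1 else ""
        if cmd == "init":
            # Success plus a capabilities map; attach=False means the kubelet
            # will not issue attach/detach calls to this driver.
            print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
            return 0
        # Report every other call as unsupported rather than printing nothing,
        # which is what produces the "unexpected end of JSON input" errors above.
        print(json.dumps({"status": "Not supported", "message": "call not implemented: " + cmd}))
        return 1

    if __name__ == "__main__":
        sys.exit(main())
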
Dec 13 14:19:38.271556 kubelet[2218]: E1213 14:19:38.271342 2218 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g44h2" podUID="b485a5b2-c009-42f0-8598-051c15f90fca" Dec 13 14:19:38.821315 env[1311]: time="2024-12-13T14:19:38.821258951Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:38.823897 env[1311]: time="2024-12-13T14:19:38.823845834Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:38.826172 env[1311]: time="2024-12-13T14:19:38.826120420Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:38.828099 env[1311]: time="2024-12-13T14:19:38.828061035Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:38.828523 env[1311]: time="2024-12-13T14:19:38.828488681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Dec 13 14:19:38.829345 env[1311]: time="2024-12-13T14:19:38.829323453Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Dec 13 14:19:38.844269 env[1311]: time="2024-12-13T14:19:38.844190370Z" level=info msg="CreateContainer within sandbox \"3fdc5a45fa8e37ed31ba8d5e2766c35fb504677c2fe843d0567038c0c6083b7d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 13 14:19:38.864520 env[1311]: time="2024-12-13T14:19:38.864069007Z" level=info msg="CreateContainer within sandbox \"3fdc5a45fa8e37ed31ba8d5e2766c35fb504677c2fe843d0567038c0c6083b7d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f7da53c0ee6293ee3e35a0dae1644da35604b1c397fc190a13c2cb51e7bc87a6\"" Dec 13 14:19:38.867942 env[1311]: time="2024-12-13T14:19:38.867270588Z" level=info msg="StartContainer for \"f7da53c0ee6293ee3e35a0dae1644da35604b1c397fc190a13c2cb51e7bc87a6\"" Dec 13 14:19:38.961780 env[1311]: time="2024-12-13T14:19:38.961691079Z" level=info msg="StartContainer for \"f7da53c0ee6293ee3e35a0dae1644da35604b1c397fc190a13c2cb51e7bc87a6\" returns successfully" Dec 13 14:19:39.346427 kubelet[2218]: E1213 14:19:39.346369 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:39.361301 kubelet[2218]: I1213 14:19:39.361252 2218 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-64d66d978b-lc7ml" podStartSLOduration=1.378841906 podStartE2EDuration="5.361197245s" podCreationTimestamp="2024-12-13 14:19:34 +0000 UTC" firstStartedPulling="2024-12-13 14:19:34.846828081 +0000 UTC m=+21.669885984" lastFinishedPulling="2024-12-13 14:19:38.82918341 +0000 UTC m=+25.652241323" observedRunningTime="2024-12-13 14:19:39.360307137 
+0000 UTC m=+26.183365070" watchObservedRunningTime="2024-12-13 14:19:39.361197245 +0000 UTC m=+26.184255178" Dec 13 14:19:39.427933 kubelet[2218]: E1213 14:19:39.427845 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:39.427933 kubelet[2218]: W1213 14:19:39.427918 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:39.428234 kubelet[2218]: E1213 14:19:39.427958 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:39.428447 kubelet[2218]: E1213 14:19:39.428422 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:39.428447 kubelet[2218]: W1213 14:19:39.428435 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:39.428447 kubelet[2218]: E1213 14:19:39.428448 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:39.428701 kubelet[2218]: E1213 14:19:39.428682 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:39.428701 kubelet[2218]: W1213 14:19:39.428694 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:39.428701 kubelet[2218]: E1213 14:19:39.428706 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:39.428957 kubelet[2218]: E1213 14:19:39.428941 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:39.428957 kubelet[2218]: W1213 14:19:39.428952 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:39.429117 kubelet[2218]: E1213 14:19:39.428964 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:39.429295 kubelet[2218]: E1213 14:19:39.429276 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:39.429295 kubelet[2218]: W1213 14:19:39.429289 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:39.429295 kubelet[2218]: E1213 14:19:39.429301 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:19:39.429474 kubelet[2218]: E1213 14:19:39.429456 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:39.429474 kubelet[2218]: W1213 14:19:39.429467 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:39.429474 kubelet[2218]: E1213 14:19:39.429478 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:39.429646 kubelet[2218]: E1213 14:19:39.429631 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:39.429646 kubelet[2218]: W1213 14:19:39.429642 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:39.429726 kubelet[2218]: E1213 14:19:39.429653 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:39.429843 kubelet[2218]: E1213 14:19:39.429830 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:39.429843 kubelet[2218]: W1213 14:19:39.429840 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:39.429907 kubelet[2218]: E1213 14:19:39.429855 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:39.430077 kubelet[2218]: E1213 14:19:39.430061 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:39.430077 kubelet[2218]: W1213 14:19:39.430073 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:39.430153 kubelet[2218]: E1213 14:19:39.430085 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:39.430264 kubelet[2218]: E1213 14:19:39.430249 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:39.430264 kubelet[2218]: W1213 14:19:39.430262 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:39.430363 kubelet[2218]: E1213 14:19:39.430276 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:19:39.430458 kubelet[2218]: E1213 14:19:39.430445 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:39.430458 kubelet[2218]: W1213 14:19:39.430455 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:39.430523 kubelet[2218]: E1213 14:19:39.430469 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:39.430654 kubelet[2218]: E1213 14:19:39.430641 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:39.430654 kubelet[2218]: W1213 14:19:39.430651 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:39.430723 kubelet[2218]: E1213 14:19:39.430665 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:39.430861 kubelet[2218]: E1213 14:19:39.430847 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:39.430861 kubelet[2218]: W1213 14:19:39.430858 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:39.430928 kubelet[2218]: E1213 14:19:39.430870 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:39.431078 kubelet[2218]: E1213 14:19:39.431064 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:39.431078 kubelet[2218]: W1213 14:19:39.431075 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:39.431167 kubelet[2218]: E1213 14:19:39.431087 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:39.431289 kubelet[2218]: E1213 14:19:39.431274 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:39.431289 kubelet[2218]: W1213 14:19:39.431285 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:39.431360 kubelet[2218]: E1213 14:19:39.431298 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:19:39.439043 kubelet[2218]: E1213 14:19:39.438874 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:39.439043 kubelet[2218]: W1213 14:19:39.438924 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:39.439043 kubelet[2218]: E1213 14:19:39.438956 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:39.439401 kubelet[2218]: E1213 14:19:39.439263 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:39.439401 kubelet[2218]: W1213 14:19:39.439273 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:39.439401 kubelet[2218]: E1213 14:19:39.439289 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:39.439636 kubelet[2218]: E1213 14:19:39.439604 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:39.439636 kubelet[2218]: W1213 14:19:39.439621 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:39.439746 kubelet[2218]: E1213 14:19:39.439646 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:39.440075 kubelet[2218]: E1213 14:19:39.440058 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:39.440075 kubelet[2218]: W1213 14:19:39.440072 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:39.440205 kubelet[2218]: E1213 14:19:39.440099 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:39.440355 kubelet[2218]: E1213 14:19:39.440330 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:39.440355 kubelet[2218]: W1213 14:19:39.440343 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:39.440451 kubelet[2218]: E1213 14:19:39.440364 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:19:39.440628 kubelet[2218]: E1213 14:19:39.440577 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:39.440628 kubelet[2218]: W1213 14:19:39.440590 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:39.440719 kubelet[2218]: E1213 14:19:39.440646 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:39.440800 kubelet[2218]: E1213 14:19:39.440785 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:39.440800 kubelet[2218]: W1213 14:19:39.440797 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:39.440887 kubelet[2218]: E1213 14:19:39.440833 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:39.440981 kubelet[2218]: E1213 14:19:39.440963 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:39.440981 kubelet[2218]: W1213 14:19:39.440975 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:39.441114 kubelet[2218]: E1213 14:19:39.441009 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:39.441238 kubelet[2218]: E1213 14:19:39.441221 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:39.441238 kubelet[2218]: W1213 14:19:39.441233 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:39.441327 kubelet[2218]: E1213 14:19:39.441254 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:19:39.445492 kubelet[2218]: E1213 14:19:39.441748 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:39.445492 kubelet[2218]: W1213 14:19:39.441765 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:39.445492 kubelet[2218]: E1213 14:19:39.441782 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:19:39.445492 kubelet[2218]: E1213 14:19:39.442129 2218 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:19:39.445492 kubelet[2218]: W1213 14:19:39.442139 2218 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:19:39.445492 kubelet[2218]: E1213 14:19:39.442153 2218 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the same three FlexVolume probe messages (driver-call.go:262, driver-call.go:149, plugins.go:730) repeat verbatim with successive timestamps through Dec 13 14:19:39.446267]
Dec 13 14:19:40.271692 kubelet[2218]: E1213 14:19:40.271593 2218 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g44h2" podUID="b485a5b2-c009-42f0-8598-051c15f90fca" Dec 13 14:19:40.348053 kubelet[2218]: I1213 14:19:40.347210 2218 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 14:19:40.348053 kubelet[2218]: E1213 14:19:40.347960 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
[the same three FlexVolume probe messages repeat verbatim again with successive timestamps from Dec 13 14:19:40.438373 through Dec 13 14:19:40.450513]
Dec 13 14:19:40.590564 env[1311]: time="2024-12-13T14:19:40.589687521Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:40.607251 env[1311]: time="2024-12-13T14:19:40.607172431Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:40.632000 env[1311]: time="2024-12-13T14:19:40.631920396Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:40.650720 env[1311]: time="2024-12-13T14:19:40.650645361Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:40.651291 env[1311]: time="2024-12-13T14:19:40.651224722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Dec 13 14:19:40.653502 env[1311]: time="2024-12-13T14:19:40.653439111Z" level=info msg="CreateContainer within sandbox \"3da41d5f68afd8308631b4e16b983a97b375567fadca013a2d27b95504bf797b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 14:19:40.795969 env[1311]: time="2024-12-13T14:19:40.795771964Z" level=info msg="CreateContainer within sandbox \"3da41d5f68afd8308631b4e16b983a97b375567fadca013a2d27b95504bf797b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"0e019815553221867298702218a362158f99919d023c57b52a0d7e05c794744e\"" Dec 13 14:19:40.796673 env[1311]: time="2024-12-13T14:19:40.796622506Z" level=info msg="StartContainer for \"0e019815553221867298702218a362158f99919d023c57b52a0d7e05c794744e\"" Dec 13 14:19:40.886859 
env[1311]: time="2024-12-13T14:19:40.886686077Z" level=info msg="StartContainer for \"0e019815553221867298702218a362158f99919d023c57b52a0d7e05c794744e\" returns successfully" Dec 13 14:19:40.905252 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e019815553221867298702218a362158f99919d023c57b52a0d7e05c794744e-rootfs.mount: Deactivated successfully. Dec 13 14:19:41.332616 env[1311]: time="2024-12-13T14:19:41.332543629Z" level=info msg="shim disconnected" id=0e019815553221867298702218a362158f99919d023c57b52a0d7e05c794744e Dec 13 14:19:41.332616 env[1311]: time="2024-12-13T14:19:41.332616606Z" level=warning msg="cleaning up after shim disconnected" id=0e019815553221867298702218a362158f99919d023c57b52a0d7e05c794744e namespace=k8s.io Dec 13 14:19:41.332936 env[1311]: time="2024-12-13T14:19:41.332635702Z" level=info msg="cleaning up dead shim" Dec 13 14:19:41.340841 env[1311]: time="2024-12-13T14:19:41.340765605Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:19:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2919 runtime=io.containerd.runc.v2\n" Dec 13 14:19:41.352172 kubelet[2218]: E1213 14:19:41.352125 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:41.354675 env[1311]: time="2024-12-13T14:19:41.354496852Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 14:19:41.366827 kubelet[2218]: I1213 14:19:41.366760 2218 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 14:19:41.367931 kubelet[2218]: E1213 14:19:41.367589 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:41.409000 audit[2937]: NETFILTER_CFG table=filter:95 family=2 entries=17 op=nft_register_rule pid=2937 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:19:41.411870 kernel: kauditd_printk_skb: 8 callbacks suppressed Dec 13 14:19:41.411940 kernel: audit: type=1325 audit(1734099581.409:288): table=filter:95 family=2 entries=17 op=nft_register_rule pid=2937 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:19:41.409000 audit[2937]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffd013653c0 a2=0 a3=7ffd013653ac items=0 ppid=2379 pid=2937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:41.422522 kernel: audit: type=1300 audit(1734099581.409:288): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffd013653c0 a2=0 a3=7ffd013653ac items=0 ppid=2379 pid=2937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:41.422730 kernel: audit: type=1327 audit(1734099581.409:288): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:19:41.409000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:19:41.426000 audit[2937]: NETFILTER_CFG table=nat:96 family=2 entries=19 op=nft_register_chain pid=2937 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Dec 13 14:19:41.426000 audit[2937]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffd013653c0 a2=0 a3=7ffd013653ac items=0 ppid=2379 pid=2937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:41.437251 kernel: audit: type=1325 audit(1734099581.426:289): table=nat:96 family=2 entries=19 op=nft_register_chain pid=2937 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:19:41.437402 kernel: audit: type=1300 audit(1734099581.426:289): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffd013653c0 a2=0 a3=7ffd013653ac items=0 ppid=2379 pid=2937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:41.437435 kernel: audit: type=1327 audit(1734099581.426:289): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:19:41.426000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:19:42.271589 kubelet[2218]: E1213 14:19:42.271505 2218 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g44h2" podUID="b485a5b2-c009-42f0-8598-051c15f90fca" Dec 13 14:19:42.354182 kubelet[2218]: E1213 14:19:42.354126 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:43.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.24:22-10.0.0.1:33002 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:19:43.122477 systemd[1]: Started sshd@7-10.0.0.24:22-10.0.0.1:33002.service. Dec 13 14:19:43.129075 kernel: audit: type=1130 audit(1734099583.121:290): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.24:22-10.0.0.1:33002 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:19:43.161000 audit[2943]: USER_ACCT pid=2943 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:19:43.163235 sshd[2943]: Accepted publickey for core from 10.0.0.1 port 33002 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg Dec 13 14:19:43.172049 kernel: audit: type=1101 audit(1734099583.161:291): pid=2943 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:19:43.172238 kernel: audit: type=1103 audit(1734099583.170:292): pid=2943 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:19:43.170000 audit[2943]: CRED_ACQ pid=2943 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:19:43.172453 sshd[2943]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:19:43.180573 kernel: audit: type=1006 audit(1734099583.171:293): pid=2943 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Dec 13 14:19:43.171000 audit[2943]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffca0653fe0 a2=3 a3=0 items=0 ppid=1 pid=2943 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:43.171000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:19:43.185343 systemd-logind[1295]: New session 8 of user core. Dec 13 14:19:43.186253 systemd[1]: Started session-8.scope. 
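[Editor's note: the audit records above carry the faulting command line in the proctitle field as hex-encoded, NUL-separated argv (for example the iptables-restore record at 14:19:41 and the sshd record at 14:19:43). A minimal Go sketch, not part of any of the tools logged here, that decodes such a field; the hard-coded value is copied from the audit(1734099581.409:288) record above, everything else is illustrative:

    package main

    import (
        "encoding/hex"
        "fmt"
        "strings"
    )

    func main() {
        // proctitle value copied verbatim from the audit PROCTITLE record above.
        const proctitle = "69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273"

        raw, err := hex.DecodeString(proctitle)
        if err != nil {
            panic(err)
        }
        // The kernel stores argv as NUL-separated strings; rejoin them with spaces for display.
        fmt.Println(strings.Join(strings.Split(string(raw), "\x00"), " "))
        // Prints: iptables-restore -w 5 -W 100000 --noflush --counters
    }

Decoding shows the netfilter changes at 14:19:41 were applied by iptables-restore with --noflush --counters, which matches the comm="iptables-restor" and exe="/usr/sbin/xtables-nft-multi" fields in the same records.]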
Dec 13 14:19:43.191000 audit[2943]: USER_START pid=2943 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:19:43.193000 audit[2946]: CRED_ACQ pid=2946 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:19:43.330001 sshd[2943]: pam_unix(sshd:session): session closed for user core Dec 13 14:19:43.330000 audit[2943]: USER_END pid=2943 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:19:43.334000 audit[2943]: CRED_DISP pid=2943 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:19:43.340000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.24:22-10.0.0.1:33002 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:19:43.340641 systemd[1]: sshd@7-10.0.0.24:22-10.0.0.1:33002.service: Deactivated successfully. Dec 13 14:19:43.342248 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 14:19:43.343128 systemd-logind[1295]: Session 8 logged out. Waiting for processes to exit. Dec 13 14:19:43.344320 systemd-logind[1295]: Removed session 8. 
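[Editor's note: the driver-call.go:262, driver-call.go:149 and plugins.go:730 messages that dominate 14:19:39–14:19:40 above come from kubelet's FlexVolume prober: it executes the driver binary found under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ with the init argument and unmarshals its stdout as JSON. Because the nodeagent~uds/uds executable is not installed yet, stdout is empty and the JSON decode fails with "unexpected end of JSON input". A rough, self-contained Go sketch of that call pattern, assuming a simplified driverStatus shape rather than kubelet's exact types:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // driverStatus is a simplified version of the JSON object a FlexVolume
    // driver is expected to print on stdout for "init" (assumption: reduced
    // to the common fields).
    type driverStatus struct {
        Status       string          `json:"status"`
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func probeDriver(path string) (*driverStatus, error) {
        out, err := exec.Command(path, "init").Output()
        if err != nil {
            // Mirrors the W1213 lines above: the binary is missing, so the call fails
            // and the captured output stays empty.
            fmt.Printf("FlexVolume: driver call failed: executable: %s, args: [init], error: %v, output: %q\n", path, err, string(out))
        }
        status := &driverStatus{}
        if err := json.Unmarshal(out, status); err != nil {
            // With empty output this is exactly "unexpected end of JSON input".
            return nil, fmt.Errorf("failed to unmarshal output for command: init, output: %q, error: %w", string(out), err)
        }
        return status, nil
    }

    func main() {
        if _, err := probeDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"); err != nil {
            fmt.Println(err)
        }
    }

The flexvol-driver container started at 14:19:40 is what eventually installs that binary, which is why the probe failures stop after this point in the log.]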
Dec 13 14:19:44.275494 kubelet[2218]: E1213 14:19:44.275442 2218 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g44h2" podUID="b485a5b2-c009-42f0-8598-051c15f90fca" Dec 13 14:19:46.271430 kubelet[2218]: E1213 14:19:46.271367 2218 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-g44h2" podUID="b485a5b2-c009-42f0-8598-051c15f90fca" Dec 13 14:19:46.666222 env[1311]: time="2024-12-13T14:19:46.665985324Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:46.668945 env[1311]: time="2024-12-13T14:19:46.668775300Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:46.671388 env[1311]: time="2024-12-13T14:19:46.671354019Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:46.675426 env[1311]: time="2024-12-13T14:19:46.675361314Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:46.675984 env[1311]: time="2024-12-13T14:19:46.675945523Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Dec 13 14:19:46.678145 env[1311]: time="2024-12-13T14:19:46.678102439Z" level=info msg="CreateContainer within sandbox \"3da41d5f68afd8308631b4e16b983a97b375567fadca013a2d27b95504bf797b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 14:19:46.701187 env[1311]: time="2024-12-13T14:19:46.701080837Z" level=info msg="CreateContainer within sandbox \"3da41d5f68afd8308631b4e16b983a97b375567fadca013a2d27b95504bf797b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"45efb109a9d1db369fa034d3b1041771f7cec1a49eff6848233b5e14b9c93c6c\"" Dec 13 14:19:46.701785 env[1311]: time="2024-12-13T14:19:46.701746379Z" level=info msg="StartContainer for \"45efb109a9d1db369fa034d3b1041771f7cec1a49eff6848233b5e14b9c93c6c\"" Dec 13 14:19:47.003499 env[1311]: time="2024-12-13T14:19:47.003381613Z" level=info msg="StartContainer for \"45efb109a9d1db369fa034d3b1041771f7cec1a49eff6848233b5e14b9c93c6c\" returns successfully" Dec 13 14:19:47.366793 kubelet[2218]: E1213 14:19:47.366653 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:48.271338 kubelet[2218]: E1213 14:19:48.271284 2218 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not 
initialized" pod="calico-system/csi-node-driver-g44h2" podUID="b485a5b2-c009-42f0-8598-051c15f90fca" Dec 13 14:19:48.283322 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-45efb109a9d1db369fa034d3b1041771f7cec1a49eff6848233b5e14b9c93c6c-rootfs.mount: Deactivated successfully. Dec 13 14:19:48.289323 env[1311]: time="2024-12-13T14:19:48.289255805Z" level=info msg="shim disconnected" id=45efb109a9d1db369fa034d3b1041771f7cec1a49eff6848233b5e14b9c93c6c Dec 13 14:19:48.289323 env[1311]: time="2024-12-13T14:19:48.289305829Z" level=warning msg="cleaning up after shim disconnected" id=45efb109a9d1db369fa034d3b1041771f7cec1a49eff6848233b5e14b9c93c6c namespace=k8s.io Dec 13 14:19:48.289323 env[1311]: time="2024-12-13T14:19:48.289314565Z" level=info msg="cleaning up dead shim" Dec 13 14:19:48.292037 kubelet[2218]: I1213 14:19:48.291983 2218 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 14:19:48.298784 env[1311]: time="2024-12-13T14:19:48.298726228Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:19:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3007 runtime=io.containerd.runc.v2\n" Dec 13 14:19:48.316833 kubelet[2218]: I1213 14:19:48.316739 2218 topology_manager.go:215] "Topology Admit Handler" podUID="0a2e947b-99c4-4ed5-a3f2-f248230d88f2" podNamespace="kube-system" podName="coredns-76f75df574-255rl" Dec 13 14:19:48.318818 kubelet[2218]: I1213 14:19:48.318455 2218 topology_manager.go:215] "Topology Admit Handler" podUID="727d322d-22ee-4e52-af18-8f9ebc3141c1" podNamespace="kube-system" podName="coredns-76f75df574-77dkv" Dec 13 14:19:48.322205 kubelet[2218]: I1213 14:19:48.322163 2218 topology_manager.go:215] "Topology Admit Handler" podUID="237a3d9e-0254-4057-bf82-74077775e376" podNamespace="calico-apiserver" podName="calico-apiserver-67dd9c689b-nvrx4" Dec 13 14:19:48.322363 kubelet[2218]: W1213 14:19:48.322259 2218 reflector.go:539] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Dec 13 14:19:48.322363 kubelet[2218]: E1213 14:19:48.322295 2218 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Dec 13 14:19:48.322363 kubelet[2218]: I1213 14:19:48.322341 2218 topology_manager.go:215] "Topology Admit Handler" podUID="3cc34ed8-34cd-4a26-9a00-bd15703328eb" podNamespace="calico-apiserver" podName="calico-apiserver-67dd9c689b-4dc82" Dec 13 14:19:48.322489 kubelet[2218]: I1213 14:19:48.322467 2218 topology_manager.go:215] "Topology Admit Handler" podUID="1706fd1e-e014-4076-a36c-7345b2980a9c" podNamespace="calico-system" podName="calico-kube-controllers-77d57bc794-t48v6" Dec 13 14:19:48.333799 systemd[1]: Started sshd@8-10.0.0.24:22-10.0.0.1:35912.service. Dec 13 14:19:48.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.24:22-10.0.0.1:35912 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:19:48.335576 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 13 14:19:48.335664 kernel: audit: type=1130 audit(1734099588.332:299): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.24:22-10.0.0.1:35912 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:19:48.369000 audit[3020]: USER_ACCT pid=3020 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:19:48.370659 sshd[3020]: Accepted publickey for core from 10.0.0.1 port 35912 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg Dec 13 14:19:48.373320 kubelet[2218]: E1213 14:19:48.373289 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:48.376377 env[1311]: time="2024-12-13T14:19:48.375123850Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 14:19:48.377035 kernel: audit: type=1101 audit(1734099588.369:300): pid=3020 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:19:48.376000 audit[3020]: CRED_ACQ pid=3020 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:19:48.378065 sshd[3020]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:19:48.382764 systemd-logind[1295]: New session 9 of user core. Dec 13 14:19:48.383517 systemd[1]: Started session-9.scope. 
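[Editor's note: the recurring dns.go:153 "Nameserver limits exceeded" warnings indicate that the host resolver configuration lists more nameservers than kubelet will pass through to pods, so only the first three (1.1.1.1, 1.0.0.1, 8.8.8.8) are applied. A small illustrative Go sketch of that truncation; the constant, the function and the fourth example server are assumptions for illustration, not kubelet's code:

    package main

    import "fmt"

    // maxNameservers mirrors the three-entry cap the kubelet warning above refers to.
    const maxNameservers = 3

    // applyNameserverLimit keeps only the first maxNameservers entries and reports
    // whether anything was dropped, which is when the warning is logged.
    func applyNameserverLimit(servers []string) (applied []string, exceeded bool) {
        if len(servers) <= maxNameservers {
            return servers, false
        }
        return servers[:maxNameservers], true
    }

    func main() {
        // 9.9.9.9 is a made-up fourth entry; the log does not show which servers were omitted.
        hostServers := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}
        if applied, exceeded := applyNameserverLimit(hostServers); exceeded {
            fmt.Printf("Nameserver limits were exceeded, the applied nameserver line is: %v\n", applied)
        }
    }
]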
Dec 13 14:19:48.385367 kernel: audit: type=1103 audit(1734099588.376:301): pid=3020 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:19:48.385437 kernel: audit: type=1006 audit(1734099588.376:302): pid=3020 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Dec 13 14:19:48.385461 kernel: audit: type=1300 audit(1734099588.376:302): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffff044cd30 a2=3 a3=0 items=0 ppid=1 pid=3020 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:48.376000 audit[3020]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffff044cd30 a2=3 a3=0 items=0 ppid=1 pid=3020 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:48.376000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:19:48.391706 kernel: audit: type=1327 audit(1734099588.376:302): proctitle=737368643A20636F7265205B707269765D Dec 13 14:19:48.391752 kernel: audit: type=1105 audit(1734099588.385:303): pid=3020 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:19:48.385000 audit[3020]: USER_START pid=3020 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:19:48.389000 audit[3023]: CRED_ACQ pid=3023 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:19:48.401120 kernel: audit: type=1103 audit(1734099588.389:304): pid=3023 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:19:48.407122 kubelet[2218]: I1213 14:19:48.407089 2218 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fff5\" (UniqueName: \"kubernetes.io/projected/727d322d-22ee-4e52-af18-8f9ebc3141c1-kube-api-access-7fff5\") pod \"coredns-76f75df574-77dkv\" (UID: \"727d322d-22ee-4e52-af18-8f9ebc3141c1\") " pod="kube-system/coredns-76f75df574-77dkv" Dec 13 14:19:48.407122 kubelet[2218]: I1213 14:19:48.407129 2218 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3cc34ed8-34cd-4a26-9a00-bd15703328eb-calico-apiserver-certs\") pod \"calico-apiserver-67dd9c689b-4dc82\" (UID: \"3cc34ed8-34cd-4a26-9a00-bd15703328eb\") " pod="calico-apiserver/calico-apiserver-67dd9c689b-4dc82" Dec 13 14:19:48.407302 kubelet[2218]: I1213 14:19:48.407171 2218 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gthh9\" (UniqueName: \"kubernetes.io/projected/0a2e947b-99c4-4ed5-a3f2-f248230d88f2-kube-api-access-gthh9\") pod \"coredns-76f75df574-255rl\" (UID: \"0a2e947b-99c4-4ed5-a3f2-f248230d88f2\") " pod="kube-system/coredns-76f75df574-255rl" Dec 13 14:19:48.407370 kubelet[2218]: I1213 14:19:48.407318 2218 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/727d322d-22ee-4e52-af18-8f9ebc3141c1-config-volume\") pod \"coredns-76f75df574-77dkv\" (UID: \"727d322d-22ee-4e52-af18-8f9ebc3141c1\") " pod="kube-system/coredns-76f75df574-77dkv" Dec 13 14:19:48.407370 kubelet[2218]: I1213 14:19:48.407359 2218 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cd7dd\" (UniqueName: \"kubernetes.io/projected/1706fd1e-e014-4076-a36c-7345b2980a9c-kube-api-access-cd7dd\") pod \"calico-kube-controllers-77d57bc794-t48v6\" (UID: \"1706fd1e-e014-4076-a36c-7345b2980a9c\") " pod="calico-system/calico-kube-controllers-77d57bc794-t48v6" Dec 13 14:19:48.407432 kubelet[2218]: I1213 14:19:48.407398 2218 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1706fd1e-e014-4076-a36c-7345b2980a9c-tigera-ca-bundle\") pod \"calico-kube-controllers-77d57bc794-t48v6\" (UID: \"1706fd1e-e014-4076-a36c-7345b2980a9c\") " pod="calico-system/calico-kube-controllers-77d57bc794-t48v6" Dec 13 14:19:48.407461 kubelet[2218]: I1213 14:19:48.407431 2218 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/237a3d9e-0254-4057-bf82-74077775e376-calico-apiserver-certs\") pod \"calico-apiserver-67dd9c689b-nvrx4\" (UID: \"237a3d9e-0254-4057-bf82-74077775e376\") " pod="calico-apiserver/calico-apiserver-67dd9c689b-nvrx4" Dec 13 14:19:48.407522 kubelet[2218]: I1213 14:19:48.407482 2218 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0a2e947b-99c4-4ed5-a3f2-f248230d88f2-config-volume\") pod \"coredns-76f75df574-255rl\" (UID: \"0a2e947b-99c4-4ed5-a3f2-f248230d88f2\") " pod="kube-system/coredns-76f75df574-255rl" Dec 13 14:19:48.407582 kubelet[2218]: I1213 14:19:48.407562 2218 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dl89\" (UniqueName: \"kubernetes.io/projected/3cc34ed8-34cd-4a26-9a00-bd15703328eb-kube-api-access-9dl89\") pod \"calico-apiserver-67dd9c689b-4dc82\" (UID: \"3cc34ed8-34cd-4a26-9a00-bd15703328eb\") " pod="calico-apiserver/calico-apiserver-67dd9c689b-4dc82" Dec 13 14:19:48.407609 kubelet[2218]: I1213 14:19:48.407599 2218 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfrmr\" (UniqueName: \"kubernetes.io/projected/237a3d9e-0254-4057-bf82-74077775e376-kube-api-access-sfrmr\") pod \"calico-apiserver-67dd9c689b-nvrx4\" (UID: \"237a3d9e-0254-4057-bf82-74077775e376\") " pod="calico-apiserver/calico-apiserver-67dd9c689b-nvrx4" Dec 13 14:19:48.586133 sshd[3020]: pam_unix(sshd:session): session closed for user core Dec 13 14:19:48.586000 audit[3020]: USER_END pid=3020 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:19:48.587000 audit[3020]: CRED_DISP pid=3020 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:19:48.592578 systemd[1]: sshd@8-10.0.0.24:22-10.0.0.1:35912.service: Deactivated successfully. Dec 13 14:19:48.593617 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 14:19:48.593963 systemd-logind[1295]: Session 9 logged out. Waiting for processes to exit. Dec 13 14:19:48.594807 systemd-logind[1295]: Removed session 9. Dec 13 14:19:48.595776 kernel: audit: type=1106 audit(1734099588.586:305): pid=3020 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:19:48.595956 kernel: audit: type=1104 audit(1734099588.587:306): pid=3020 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:19:48.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.24:22-10.0.0.1:35912 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:19:48.633108 env[1311]: time="2024-12-13T14:19:48.633042854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67dd9c689b-4dc82,Uid:3cc34ed8-34cd-4a26-9a00-bd15703328eb,Namespace:calico-apiserver,Attempt:0,}" Dec 13 14:19:48.635215 env[1311]: time="2024-12-13T14:19:48.635156436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67dd9c689b-nvrx4,Uid:237a3d9e-0254-4057-bf82-74077775e376,Namespace:calico-apiserver,Attempt:0,}" Dec 13 14:19:48.640155 env[1311]: time="2024-12-13T14:19:48.640103526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77d57bc794-t48v6,Uid:1706fd1e-e014-4076-a36c-7345b2980a9c,Namespace:calico-system,Attempt:0,}" Dec 13 14:19:49.346482 env[1311]: time="2024-12-13T14:19:49.346392858Z" level=error msg="Failed to destroy network for sandbox \"8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:19:49.349899 env[1311]: time="2024-12-13T14:19:49.346882548Z" level=error msg="encountered an error cleaning up failed sandbox \"8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:19:49.349899 env[1311]: time="2024-12-13T14:19:49.346940307Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67dd9c689b-4dc82,Uid:3cc34ed8-34cd-4a26-9a00-bd15703328eb,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup 
network for sandbox \"8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:19:49.349443 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e-shm.mount: Deactivated successfully. Dec 13 14:19:49.350338 kubelet[2218]: E1213 14:19:49.347284 2218 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:19:49.350338 kubelet[2218]: E1213 14:19:49.347350 2218 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67dd9c689b-4dc82" Dec 13 14:19:49.350338 kubelet[2218]: E1213 14:19:49.347370 2218 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67dd9c689b-4dc82" Dec 13 14:19:49.350503 kubelet[2218]: E1213 14:19:49.347426 2218 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67dd9c689b-4dc82_calico-apiserver(3cc34ed8-34cd-4a26-9a00-bd15703328eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67dd9c689b-4dc82_calico-apiserver(3cc34ed8-34cd-4a26-9a00-bd15703328eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67dd9c689b-4dc82" podUID="3cc34ed8-34cd-4a26-9a00-bd15703328eb" Dec 13 14:19:49.369742 env[1311]: time="2024-12-13T14:19:49.369634762Z" level=error msg="Failed to destroy network for sandbox \"8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:19:49.372028 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3-shm.mount: Deactivated successfully. 
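[Editor's note: each RunPodSandbox failure in this stretch ends in the Calico CNI plugin with "stat /var/lib/calico/nodename: no such file or directory". As the error text itself says, that file is created by the calico/node container once it is running and has mounted /var/lib/calico/; until then the plugin cannot determine which Calico node it is on, so every sandbox creation for the apiserver, kube-controllers and coredns pods fails. A hedged Go sketch of the failing step, using only the path and message from the log; the helper is illustrative, not Calico's implementation:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // nodenameFile is the path the errors above refer to; calico/node writes it at startup.
    const nodenameFile = "/var/lib/calico/nodename"

    func calicoNodeName() (string, error) {
        if _, err := os.Stat(nodenameFile); err != nil {
            // Produces the same shape of message seen in the sandbox errors above.
            return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
        }
        data, err := os.ReadFile(nodenameFile)
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(data)), nil
    }

    func main() {
        name, err := calicoNodeName()
        if err != nil {
            fmt.Println("cannot set up pod network:", err)
            return
        }
        fmt.Println("calico node name:", name)
    }
]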
Dec 13 14:19:49.375787 env[1311]: time="2024-12-13T14:19:49.373533359Z" level=error msg="encountered an error cleaning up failed sandbox \"8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:19:49.375787 env[1311]: time="2024-12-13T14:19:49.373595846Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67dd9c689b-nvrx4,Uid:237a3d9e-0254-4057-bf82-74077775e376,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:19:49.375972 kubelet[2218]: E1213 14:19:49.373766 2218 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:19:49.375972 kubelet[2218]: E1213 14:19:49.373827 2218 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67dd9c689b-nvrx4" Dec 13 14:19:49.375972 kubelet[2218]: E1213 14:19:49.373852 2218 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67dd9c689b-nvrx4" Dec 13 14:19:49.376400 kubelet[2218]: E1213 14:19:49.373933 2218 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67dd9c689b-nvrx4_calico-apiserver(237a3d9e-0254-4057-bf82-74077775e376)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67dd9c689b-nvrx4_calico-apiserver(237a3d9e-0254-4057-bf82-74077775e376)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67dd9c689b-nvrx4" podUID="237a3d9e-0254-4057-bf82-74077775e376" Dec 13 14:19:49.376400 kubelet[2218]: I1213 14:19:49.375693 2218 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3" Dec 13 14:19:49.376483 env[1311]: 
time="2024-12-13T14:19:49.376378797Z" level=info msg="StopPodSandbox for \"8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3\"" Dec 13 14:19:49.378626 kubelet[2218]: I1213 14:19:49.378598 2218 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e" Dec 13 14:19:49.380287 env[1311]: time="2024-12-13T14:19:49.379109950Z" level=info msg="StopPodSandbox for \"8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e\"" Dec 13 14:19:49.382049 env[1311]: time="2024-12-13T14:19:49.381951641Z" level=error msg="Failed to destroy network for sandbox \"1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:19:49.385269 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9-shm.mount: Deactivated successfully. Dec 13 14:19:49.386767 env[1311]: time="2024-12-13T14:19:49.386715996Z" level=error msg="encountered an error cleaning up failed sandbox \"1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:19:49.386880 env[1311]: time="2024-12-13T14:19:49.386786007Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77d57bc794-t48v6,Uid:1706fd1e-e014-4076-a36c-7345b2980a9c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:19:49.387084 kubelet[2218]: E1213 14:19:49.387057 2218 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:19:49.387159 kubelet[2218]: E1213 14:19:49.387115 2218 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-77d57bc794-t48v6" Dec 13 14:19:49.387159 kubelet[2218]: E1213 14:19:49.387135 2218 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-77d57bc794-t48v6" Dec 13 14:19:49.387233 kubelet[2218]: E1213 14:19:49.387193 2218 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-77d57bc794-t48v6_calico-system(1706fd1e-e014-4076-a36c-7345b2980a9c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-77d57bc794-t48v6_calico-system(1706fd1e-e014-4076-a36c-7345b2980a9c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-77d57bc794-t48v6" podUID="1706fd1e-e014-4076-a36c-7345b2980a9c" Dec 13 14:19:49.407832 env[1311]: time="2024-12-13T14:19:49.407755350Z" level=error msg="StopPodSandbox for \"8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3\" failed" error="failed to destroy network for sandbox \"8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:19:49.408395 kubelet[2218]: E1213 14:19:49.408352 2218 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3" Dec 13 14:19:49.408481 kubelet[2218]: E1213 14:19:49.408449 2218 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3"} Dec 13 14:19:49.408530 kubelet[2218]: E1213 14:19:49.408485 2218 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"237a3d9e-0254-4057-bf82-74077775e376\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 14:19:49.408530 kubelet[2218]: E1213 14:19:49.408518 2218 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"237a3d9e-0254-4057-bf82-74077775e376\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67dd9c689b-nvrx4" podUID="237a3d9e-0254-4057-bf82-74077775e376" Dec 13 14:19:49.416425 env[1311]: time="2024-12-13T14:19:49.416326539Z" level=error msg="StopPodSandbox for \"8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e\" 
failed" error="failed to destroy network for sandbox \"8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:19:49.416668 kubelet[2218]: E1213 14:19:49.416638 2218 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e" Dec 13 14:19:49.416729 kubelet[2218]: E1213 14:19:49.416688 2218 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e"} Dec 13 14:19:49.416763 kubelet[2218]: E1213 14:19:49.416736 2218 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3cc34ed8-34cd-4a26-9a00-bd15703328eb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 14:19:49.416857 kubelet[2218]: E1213 14:19:49.416771 2218 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3cc34ed8-34cd-4a26-9a00-bd15703328eb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67dd9c689b-4dc82" podUID="3cc34ed8-34cd-4a26-9a00-bd15703328eb" Dec 13 14:19:49.525378 kubelet[2218]: E1213 14:19:49.525328 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:49.525953 env[1311]: time="2024-12-13T14:19:49.525906867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-255rl,Uid:0a2e947b-99c4-4ed5-a3f2-f248230d88f2,Namespace:kube-system,Attempt:0,}" Dec 13 14:19:49.542499 kubelet[2218]: E1213 14:19:49.542204 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:49.543236 env[1311]: time="2024-12-13T14:19:49.543187667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-77dkv,Uid:727d322d-22ee-4e52-af18-8f9ebc3141c1,Namespace:kube-system,Attempt:0,}" Dec 13 14:19:49.589198 env[1311]: time="2024-12-13T14:19:49.589130677Z" level=error msg="Failed to destroy network for sandbox \"c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Dec 13 14:19:49.589515 env[1311]: time="2024-12-13T14:19:49.589485354Z" level=error msg="encountered an error cleaning up failed sandbox \"c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:19:49.589576 env[1311]: time="2024-12-13T14:19:49.589537953Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-255rl,Uid:0a2e947b-99c4-4ed5-a3f2-f248230d88f2,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:19:49.589797 kubelet[2218]: E1213 14:19:49.589777 2218 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:19:49.589888 kubelet[2218]: E1213 14:19:49.589834 2218 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-255rl" Dec 13 14:19:49.589888 kubelet[2218]: E1213 14:19:49.589854 2218 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-255rl" Dec 13 14:19:49.589948 kubelet[2218]: E1213 14:19:49.589919 2218 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-255rl_kube-system(0a2e947b-99c4-4ed5-a3f2-f248230d88f2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-255rl_kube-system(0a2e947b-99c4-4ed5-a3f2-f248230d88f2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-255rl" podUID="0a2e947b-99c4-4ed5-a3f2-f248230d88f2" Dec 13 14:19:49.606378 env[1311]: time="2024-12-13T14:19:49.606169462Z" level=error msg="Failed to destroy network for sandbox \"7ae3c896a9fae2bd7d601cf8c2536ec651b6c153e49ffd437e2d9ce96e31ba91\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:19:49.606700 env[1311]: time="2024-12-13T14:19:49.606646298Z" level=error msg="encountered an error cleaning up failed sandbox \"7ae3c896a9fae2bd7d601cf8c2536ec651b6c153e49ffd437e2d9ce96e31ba91\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:19:49.606785 env[1311]: time="2024-12-13T14:19:49.606710780Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-77dkv,Uid:727d322d-22ee-4e52-af18-8f9ebc3141c1,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7ae3c896a9fae2bd7d601cf8c2536ec651b6c153e49ffd437e2d9ce96e31ba91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:19:49.609087 kubelet[2218]: E1213 14:19:49.607119 2218 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ae3c896a9fae2bd7d601cf8c2536ec651b6c153e49ffd437e2d9ce96e31ba91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:19:49.609087 kubelet[2218]: E1213 14:19:49.607201 2218 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ae3c896a9fae2bd7d601cf8c2536ec651b6c153e49ffd437e2d9ce96e31ba91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-77dkv" Dec 13 14:19:49.609087 kubelet[2218]: E1213 14:19:49.607235 2218 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ae3c896a9fae2bd7d601cf8c2536ec651b6c153e49ffd437e2d9ce96e31ba91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-77dkv" Dec 13 14:19:49.609294 kubelet[2218]: E1213 14:19:49.607308 2218 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-77dkv_kube-system(727d322d-22ee-4e52-af18-8f9ebc3141c1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-77dkv_kube-system(727d322d-22ee-4e52-af18-8f9ebc3141c1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7ae3c896a9fae2bd7d601cf8c2536ec651b6c153e49ffd437e2d9ce96e31ba91\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-77dkv" podUID="727d322d-22ee-4e52-af18-8f9ebc3141c1" Dec 13 14:19:50.274847 env[1311]: time="2024-12-13T14:19:50.274775326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g44h2,Uid:b485a5b2-c009-42f0-8598-051c15f90fca,Namespace:calico-system,Attempt:0,}" Dec 13 14:19:50.285499 
systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a-shm.mount: Deactivated successfully. Dec 13 14:19:50.382357 kubelet[2218]: I1213 14:19:50.382170 2218 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ae3c896a9fae2bd7d601cf8c2536ec651b6c153e49ffd437e2d9ce96e31ba91" Dec 13 14:19:50.383326 env[1311]: time="2024-12-13T14:19:50.383268472Z" level=info msg="StopPodSandbox for \"7ae3c896a9fae2bd7d601cf8c2536ec651b6c153e49ffd437e2d9ce96e31ba91\"" Dec 13 14:19:50.385097 kubelet[2218]: I1213 14:19:50.384378 2218 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a" Dec 13 14:19:50.385227 env[1311]: time="2024-12-13T14:19:50.384790602Z" level=info msg="StopPodSandbox for \"c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a\"" Dec 13 14:19:50.386325 kubelet[2218]: I1213 14:19:50.386302 2218 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9" Dec 13 14:19:50.386777 env[1311]: time="2024-12-13T14:19:50.386739064Z" level=info msg="StopPodSandbox for \"1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9\"" Dec 13 14:19:50.452648 env[1311]: time="2024-12-13T14:19:50.452562333Z" level=error msg="StopPodSandbox for \"1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9\" failed" error="failed to destroy network for sandbox \"1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:19:50.452880 kubelet[2218]: E1213 14:19:50.452861 2218 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9" Dec 13 14:19:50.452952 kubelet[2218]: E1213 14:19:50.452908 2218 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9"} Dec 13 14:19:50.452952 kubelet[2218]: E1213 14:19:50.452946 2218 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1706fd1e-e014-4076-a36c-7345b2980a9c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 14:19:50.453096 kubelet[2218]: E1213 14:19:50.452982 2218 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1706fd1e-e014-4076-a36c-7345b2980a9c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-77d57bc794-t48v6" podUID="1706fd1e-e014-4076-a36c-7345b2980a9c" Dec 13 14:19:50.471773 env[1311]: time="2024-12-13T14:19:50.471680672Z" level=error msg="StopPodSandbox for \"c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a\" failed" error="failed to destroy network for sandbox \"c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:19:50.472089 kubelet[2218]: E1213 14:19:50.472059 2218 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a" Dec 13 14:19:50.472194 kubelet[2218]: E1213 14:19:50.472123 2218 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a"} Dec 13 14:19:50.472194 kubelet[2218]: E1213 14:19:50.472187 2218 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0a2e947b-99c4-4ed5-a3f2-f248230d88f2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 14:19:50.472290 kubelet[2218]: E1213 14:19:50.472219 2218 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0a2e947b-99c4-4ed5-a3f2-f248230d88f2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-255rl" podUID="0a2e947b-99c4-4ed5-a3f2-f248230d88f2" Dec 13 14:19:50.494649 env[1311]: time="2024-12-13T14:19:50.494556521Z" level=error msg="StopPodSandbox for \"7ae3c896a9fae2bd7d601cf8c2536ec651b6c153e49ffd437e2d9ce96e31ba91\" failed" error="failed to destroy network for sandbox \"7ae3c896a9fae2bd7d601cf8c2536ec651b6c153e49ffd437e2d9ce96e31ba91\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:19:50.494957 kubelet[2218]: E1213 14:19:50.494919 2218 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7ae3c896a9fae2bd7d601cf8c2536ec651b6c153e49ffd437e2d9ce96e31ba91\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7ae3c896a9fae2bd7d601cf8c2536ec651b6c153e49ffd437e2d9ce96e31ba91" Dec 13 14:19:50.495054 kubelet[2218]: E1213 14:19:50.494985 2218 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7ae3c896a9fae2bd7d601cf8c2536ec651b6c153e49ffd437e2d9ce96e31ba91"} Dec 13 14:19:50.495087 kubelet[2218]: E1213 14:19:50.495068 2218 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"727d322d-22ee-4e52-af18-8f9ebc3141c1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7ae3c896a9fae2bd7d601cf8c2536ec651b6c153e49ffd437e2d9ce96e31ba91\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 14:19:50.495155 kubelet[2218]: E1213 14:19:50.495110 2218 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"727d322d-22ee-4e52-af18-8f9ebc3141c1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7ae3c896a9fae2bd7d601cf8c2536ec651b6c153e49ffd437e2d9ce96e31ba91\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-77dkv" podUID="727d322d-22ee-4e52-af18-8f9ebc3141c1" Dec 13 14:19:50.910518 env[1311]: time="2024-12-13T14:19:50.910421417Z" level=error msg="Failed to destroy network for sandbox \"be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:19:50.913759 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8-shm.mount: Deactivated successfully. 
Dec 13 14:19:50.915228 env[1311]: time="2024-12-13T14:19:50.915145725Z" level=error msg="encountered an error cleaning up failed sandbox \"be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:19:50.915340 env[1311]: time="2024-12-13T14:19:50.915227209Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g44h2,Uid:b485a5b2-c009-42f0-8598-051c15f90fca,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:19:50.915579 kubelet[2218]: E1213 14:19:50.915546 2218 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:19:50.915667 kubelet[2218]: E1213 14:19:50.915627 2218 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-g44h2" Dec 13 14:19:50.915667 kubelet[2218]: E1213 14:19:50.915656 2218 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-g44h2" Dec 13 14:19:50.915746 kubelet[2218]: E1213 14:19:50.915732 2218 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-g44h2_calico-system(b485a5b2-c009-42f0-8598-051c15f90fca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-g44h2_calico-system(b485a5b2-c009-42f0-8598-051c15f90fca)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-g44h2" podUID="b485a5b2-c009-42f0-8598-051c15f90fca" Dec 13 14:19:51.389169 kubelet[2218]: I1213 14:19:51.389114 2218 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8" Dec 13 14:19:51.389817 env[1311]: time="2024-12-13T14:19:51.389768457Z" level=info msg="StopPodSandbox for 
\"be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8\"" Dec 13 14:19:51.418083 env[1311]: time="2024-12-13T14:19:51.417980818Z" level=error msg="StopPodSandbox for \"be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8\" failed" error="failed to destroy network for sandbox \"be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:19:51.418319 kubelet[2218]: E1213 14:19:51.418291 2218 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8" Dec 13 14:19:51.418393 kubelet[2218]: E1213 14:19:51.418344 2218 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8"} Dec 13 14:19:51.418393 kubelet[2218]: E1213 14:19:51.418387 2218 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b485a5b2-c009-42f0-8598-051c15f90fca\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 14:19:51.418523 kubelet[2218]: E1213 14:19:51.418426 2218 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b485a5b2-c009-42f0-8598-051c15f90fca\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-g44h2" podUID="b485a5b2-c009-42f0-8598-051c15f90fca" Dec 13 14:19:53.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.24:22-10.0.0.1:35926 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:19:53.589658 systemd[1]: Started sshd@9-10.0.0.24:22-10.0.0.1:35926.service. Dec 13 14:19:53.601672 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 14:19:53.602051 kernel: audit: type=1130 audit(1734099593.588:308): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.24:22-10.0.0.1:35926 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:19:54.780000 audit[3415]: USER_ACCT pid=3415 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:19:54.813434 systemd-logind[1295]: New session 10 of user core. Dec 13 14:19:54.814313 kernel: audit: type=1101 audit(1734099594.780:309): pid=3415 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:19:54.814372 sshd[3415]: Accepted publickey for core from 10.0.0.1 port 35926 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg Dec 13 14:19:54.791000 audit[3415]: CRED_ACQ pid=3415 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:19:54.808867 sshd[3415]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:19:54.813997 systemd[1]: Started session-10.scope. Dec 13 14:19:54.822624 kernel: audit: type=1103 audit(1734099594.791:310): pid=3415 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:19:54.822748 kernel: audit: type=1006 audit(1734099594.791:311): pid=3415 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Dec 13 14:19:54.822842 kernel: audit: type=1300 audit(1734099594.791:311): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffeace7f30 a2=3 a3=0 items=0 ppid=1 pid=3415 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:54.791000 audit[3415]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffeace7f30 a2=3 a3=0 items=0 ppid=1 pid=3415 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:54.791000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:19:54.828902 kernel: audit: type=1327 audit(1734099594.791:311): proctitle=737368643A20636F7265205B707269765D Dec 13 14:19:54.819000 audit[3415]: USER_START pid=3415 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:19:54.833868 kernel: audit: type=1105 audit(1734099594.819:312): pid=3415 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:19:54.833939 kernel: audit: type=1103 audit(1734099594.822:313): pid=3418 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:19:54.822000 audit[3418]: CRED_ACQ pid=3418 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:19:54.882183 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3801927562.mount: Deactivated successfully. Dec 13 14:19:55.661540 sshd[3415]: pam_unix(sshd:session): session closed for user core Dec 13 14:19:55.661000 audit[3415]: USER_END pid=3415 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:19:55.664824 systemd[1]: sshd@9-10.0.0.24:22-10.0.0.1:35926.service: Deactivated successfully. Dec 13 14:19:55.666407 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 14:19:55.666911 systemd-logind[1295]: Session 10 logged out. Waiting for processes to exit. Dec 13 14:19:55.668051 kernel: audit: type=1106 audit(1734099595.661:314): pid=3415 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:19:55.662000 audit[3415]: CRED_DISP pid=3415 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:19:55.668446 systemd-logind[1295]: Removed session 10. Dec 13 14:19:55.664000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.24:22-10.0.0.1:35926 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:19:55.673140 kernel: audit: type=1104 audit(1734099595.662:315): pid=3415 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:19:55.884329 env[1311]: time="2024-12-13T14:19:55.884220248Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:56.085493 env[1311]: time="2024-12-13T14:19:56.085426064Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:56.091541 env[1311]: time="2024-12-13T14:19:56.091470455Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:56.143334 env[1311]: time="2024-12-13T14:19:56.143223568Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:19:56.143776 env[1311]: time="2024-12-13T14:19:56.143700734Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Dec 13 14:19:56.162039 env[1311]: time="2024-12-13T14:19:56.161876338Z" level=info msg="CreateContainer within sandbox \"3da41d5f68afd8308631b4e16b983a97b375567fadca013a2d27b95504bf797b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 14:19:57.448638 env[1311]: time="2024-12-13T14:19:57.448562365Z" level=info msg="CreateContainer within sandbox \"3da41d5f68afd8308631b4e16b983a97b375567fadca013a2d27b95504bf797b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a225953aa7e35cdcb83cc9f48d2d7f35213192937909ed0bdf6e6394cd45b4e6\"" Dec 13 14:19:57.449181 env[1311]: time="2024-12-13T14:19:57.449138156Z" level=info msg="StartContainer for \"a225953aa7e35cdcb83cc9f48d2d7f35213192937909ed0bdf6e6394cd45b4e6\"" Dec 13 14:19:57.612049 env[1311]: time="2024-12-13T14:19:57.611959298Z" level=info msg="StartContainer for \"a225953aa7e35cdcb83cc9f48d2d7f35213192937909ed0bdf6e6394cd45b4e6\" returns successfully" Dec 13 14:19:57.633616 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 14:19:57.633710 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. 
Dec 13 14:19:57.743634 kubelet[2218]: E1213 14:19:57.743596 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:58.745691 kubelet[2218]: E1213 14:19:58.745646 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:19:59.511800 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 14:19:59.511992 kernel: audit: type=1400 audit(1734099599.505:317): avc: denied { write } for pid=3588 comm="tee" name="fd" dev="proc" ino=23408 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 14:19:59.505000 audit[3588]: AVC avc: denied { write } for pid=3588 comm="tee" name="fd" dev="proc" ino=23408 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 14:19:59.505000 audit[3588]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe66467a2b a2=241 a3=1b6 items=1 ppid=3559 pid=3588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:59.518049 kernel: audit: type=1300 audit(1734099599.505:317): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe66467a2b a2=241 a3=1b6 items=1 ppid=3559 pid=3588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:59.505000 audit: CWD cwd="/etc/service/enabled/bird/log" Dec 13 14:19:59.542805 kernel: audit: type=1307 audit(1734099599.505:317): cwd="/etc/service/enabled/bird/log" Dec 13 14:19:59.542946 kernel: audit: type=1302 audit(1734099599.505:317): item=0 name="/dev/fd/63" inode=23405 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:19:59.505000 audit: PATH item=0 name="/dev/fd/63" inode=23405 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:19:59.550591 kernel: audit: type=1327 audit(1734099599.505:317): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 14:19:59.550708 kernel: audit: type=1400 audit(1734099599.523:318): avc: denied { write } for pid=3617 comm="tee" name="fd" dev="proc" ino=26023 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 14:19:59.505000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 14:19:59.523000 audit[3617]: AVC avc: denied { write } for pid=3617 comm="tee" name="fd" dev="proc" ino=26023 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 14:19:59.556404 kernel: audit: type=1300 audit(1734099599.523:318): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe71b26a2a a2=241 a3=1b6 items=1 ppid=3572 pid=3617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:59.523000 audit[3617]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe71b26a2a a2=241 a3=1b6 items=1 ppid=3572 pid=3617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:59.558164 kernel: audit: type=1307 audit(1734099599.523:318): cwd="/etc/service/enabled/confd/log" Dec 13 14:19:59.523000 audit: CWD cwd="/etc/service/enabled/confd/log" Dec 13 14:19:59.562190 kernel: audit: type=1302 audit(1734099599.523:318): item=0 name="/dev/fd/63" inode=26018 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:19:59.523000 audit: PATH item=0 name="/dev/fd/63" inode=26018 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:19:59.523000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 14:19:59.605901 kernel: audit: type=1327 audit(1734099599.523:318): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 14:19:59.523000 audit[3603]: AVC avc: denied { write } for pid=3603 comm="tee" name="fd" dev="proc" ino=24966 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 14:19:59.523000 audit[3603]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe02149a1b a2=241 a3=1b6 items=1 ppid=3578 pid=3603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:59.523000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Dec 13 14:19:59.523000 audit: PATH item=0 name="/dev/fd/63" inode=24280 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:19:59.523000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 14:19:59.536000 audit[3605]: AVC avc: denied { write } for pid=3605 comm="tee" name="fd" dev="proc" ino=26029 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 14:19:59.536000 audit[3605]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc9fd35a2a a2=241 a3=1b6 items=1 ppid=3580 pid=3605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:59.536000 audit: CWD cwd="/etc/service/enabled/bird6/log" Dec 13 14:19:59.536000 audit: PATH item=0 name="/dev/fd/63" inode=24957 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:19:59.536000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 14:19:59.540000 audit[3613]: AVC avc: denied { write } for pid=3613 comm="tee" name="fd" dev="proc" ino=26033 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 14:19:59.540000 audit[3613]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffdd56bca2a a2=241 a3=1b6 items=1 ppid=3561 pid=3613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:59.540000 audit: CWD cwd="/etc/service/enabled/felix/log" Dec 13 14:19:59.540000 audit: PATH item=0 name="/dev/fd/63" inode=23409 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:19:59.540000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 14:19:59.560000 audit[3636]: AVC avc: denied { write } for pid=3636 comm="tee" name="fd" dev="proc" ino=24970 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 14:19:59.560000 audit[3636]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffca953ca2c a2=241 a3=1b6 items=1 ppid=3573 pid=3636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:59.560000 audit: CWD cwd="/etc/service/enabled/cni/log" Dec 13 14:19:59.560000 audit: PATH item=0 name="/dev/fd/63" inode=24284 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:19:59.560000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 14:19:59.564000 audit[3641]: AVC avc: denied { write } for pid=3641 comm="tee" name="fd" dev="proc" ino=24296 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 14:19:59.564000 audit[3641]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc5fb74a1a a2=241 a3=1b6 items=1 ppid=3575 pid=3641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:59.564000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Dec 13 14:19:59.564000 audit: PATH item=0 name="/dev/fd/63" inode=26040 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:19:59.564000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 14:19:59.794000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.794000 audit[3675]: AVC avc: 
denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.794000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.794000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.794000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.794000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.794000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.794000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.794000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.794000 audit: BPF prog-id=10 op=LOAD Dec 13 14:19:59.794000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffef3e73ff0 a2=98 a3=3 items=0 ppid=3577 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:59.794000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:19:59.794000 audit: BPF prog-id=10 op=UNLOAD Dec 13 14:19:59.794000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.794000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.794000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.794000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.794000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.794000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.794000 audit[3675]: AVC avc: denied { 
perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.794000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.794000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.794000 audit: BPF prog-id=11 op=LOAD Dec 13 14:19:59.794000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffef3e73dd0 a2=74 a3=540051 items=0 ppid=3577 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:59.794000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:19:59.795000 audit: BPF prog-id=11 op=UNLOAD Dec 13 14:19:59.795000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.795000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.795000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.795000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.795000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.795000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.795000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.795000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.795000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.795000 audit: BPF prog-id=12 op=LOAD Dec 13 14:19:59.795000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffef3e73e00 a2=94 a3=2 items=0 ppid=3577 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:59.795000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 
14:19:59.795000 audit: BPF prog-id=12 op=UNLOAD Dec 13 14:19:59.902000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.902000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.902000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.902000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.902000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.902000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.902000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.902000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.902000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.902000 audit: BPF prog-id=13 op=LOAD Dec 13 14:19:59.902000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffef3e73cc0 a2=40 a3=1 items=0 ppid=3577 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:59.902000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:19:59.902000 audit: BPF prog-id=13 op=UNLOAD Dec 13 14:19:59.902000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.902000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffef3e73d90 a2=50 a3=7ffef3e73e70 items=0 ppid=3577 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:59.902000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:19:59.910000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.910000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffef3e73cd0 a2=28 a3=0 items=0 ppid=3577 pid=3675 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:59.910000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:19:59.910000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.910000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffef3e73d00 a2=28 a3=0 items=0 ppid=3577 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:59.910000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:19:59.910000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.910000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffef3e73c10 a2=28 a3=0 items=0 ppid=3577 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:59.910000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:19:59.910000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.910000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffef3e73d20 a2=28 a3=0 items=0 ppid=3577 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:59.910000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:19:59.910000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.910000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffef3e73d00 a2=28 a3=0 items=0 ppid=3577 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:59.910000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:19:59.910000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.910000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffef3e73cf0 a2=28 a3=0 items=0 ppid=3577 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:59.910000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:19:59.910000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.910000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffef3e73d20 a2=28 a3=0 items=0 ppid=3577 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:59.910000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:19:59.910000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.910000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffef3e73d00 a2=28 a3=0 items=0 ppid=3577 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:59.910000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:19:59.910000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.910000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffef3e73d20 a2=28 a3=0 items=0 ppid=3577 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:59.910000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:19:59.910000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.910000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffef3e73cf0 a2=28 a3=0 items=0 ppid=3577 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:59.910000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:19:59.910000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.910000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffef3e73d60 a2=28 a3=0 items=0 ppid=3577 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:59.910000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:19:59.910000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.910000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffef3e73b10 a2=50 a3=1 items=0 ppid=3577 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:59.910000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:19:59.910000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.910000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.910000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.910000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.910000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.910000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.910000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.910000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.910000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.910000 audit: BPF prog-id=14 op=LOAD Dec 13 14:19:59.910000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffef3e73b10 a2=94 a3=5 items=0 ppid=3577 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:59.910000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:19:59.910000 audit: BPF prog-id=14 op=UNLOAD Dec 13 14:19:59.910000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.910000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffef3e73bc0 a2=50 a3=1 items=0 ppid=3577 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:59.910000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:19:59.910000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.910000 audit[3675]: 
SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffef3e73ce0 a2=4 a3=38 items=0 ppid=3577 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:59.910000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:19:59.910000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.910000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.910000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.910000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.910000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.910000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.910000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.910000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.910000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.910000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.910000 audit[3675]: AVC avc: denied { confidentiality } for pid=3675 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 14:19:59.910000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffef3e73d30 a2=94 a3=6 items=0 ppid=3577 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:59.910000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:19:59.910000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.910000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.910000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.910000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.910000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.910000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.910000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.910000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.910000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.910000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.910000 audit[3675]: AVC avc: denied { confidentiality } for pid=3675 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 14:19:59.910000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffef3e734e0 a2=94 a3=83 items=0 ppid=3577 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:59.910000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:19:59.911000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.911000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.911000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.911000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.911000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Dec 13 14:19:59.911000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.911000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.911000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.911000 audit[3675]: AVC avc: denied { perfmon } for pid=3675 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.911000 audit[3675]: AVC avc: denied { bpf } for pid=3675 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.911000 audit[3675]: AVC avc: denied { confidentiality } for pid=3675 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 14:19:59.911000 audit[3675]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffef3e734e0 a2=94 a3=83 items=0 ppid=3577 pid=3675 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:59.911000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:19:59.917000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.917000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.917000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.917000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.917000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.917000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.917000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.917000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.917000 audit[3678]: AVC avc: 
denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.917000 audit: BPF prog-id=15 op=LOAD Dec 13 14:19:59.917000 audit[3678]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc172bedf0 a2=98 a3=1999999999999999 items=0 ppid=3577 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:59.917000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 14:19:59.917000 audit: BPF prog-id=15 op=UNLOAD Dec 13 14:19:59.917000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.917000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.917000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.917000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.917000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.917000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.917000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.917000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.917000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.917000 audit: BPF prog-id=16 op=LOAD Dec 13 14:19:59.917000 audit[3678]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc172becd0 a2=74 a3=ffff items=0 ppid=3577 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:59.917000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 
14:19:59.917000 audit: BPF prog-id=16 op=UNLOAD Dec 13 14:19:59.917000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.917000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.917000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.917000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.917000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.917000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.917000 audit[3678]: AVC avc: denied { perfmon } for pid=3678 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.917000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.917000 audit[3678]: AVC avc: denied { bpf } for pid=3678 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.917000 audit: BPF prog-id=17 op=LOAD Dec 13 14:19:59.917000 audit[3678]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc172bed10 a2=40 a3=7ffc172beef0 items=0 ppid=3577 pid=3678 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:59.917000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 14:19:59.918000 audit: BPF prog-id=17 op=UNLOAD Dec 13 14:19:59.955328 systemd-networkd[1087]: vxlan.calico: Link UP Dec 13 14:19:59.955338 systemd-networkd[1087]: vxlan.calico: Gained carrier Dec 13 14:19:59.964000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.964000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.964000 audit[3706]: AVC avc: denied { perfmon } for pid=3706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.964000 audit[3706]: 
AVC avc: denied { perfmon } for pid=3706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.964000 audit[3706]: AVC avc: denied { perfmon } for pid=3706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.964000 audit[3706]: AVC avc: denied { perfmon } for pid=3706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.964000 audit[3706]: AVC avc: denied { perfmon } for pid=3706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.964000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.964000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.964000 audit: BPF prog-id=18 op=LOAD Dec 13 14:19:59.964000 audit[3706]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff81d61f90 a2=98 a3=ffffffff items=0 ppid=3577 pid=3706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:59.964000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:19:59.964000 audit: BPF prog-id=18 op=UNLOAD Dec 13 14:19:59.964000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.964000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.964000 audit[3706]: AVC avc: denied { perfmon } for pid=3706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.964000 audit[3706]: AVC avc: denied { perfmon } for pid=3706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.964000 audit[3706]: AVC avc: denied { perfmon } for pid=3706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.964000 audit[3706]: AVC avc: denied { perfmon } for pid=3706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.964000 audit[3706]: AVC avc: denied { perfmon } for pid=3706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.964000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.964000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.964000 audit: BPF prog-id=19 op=LOAD Dec 13 14:19:59.964000 audit[3706]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff81d61da0 a2=74 a3=540051 items=0 ppid=3577 pid=3706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:59.964000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:19:59.965000 audit: BPF prog-id=19 op=UNLOAD Dec 13 14:19:59.965000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.965000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.965000 audit[3706]: AVC avc: denied { perfmon } for pid=3706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.965000 audit[3706]: AVC avc: denied { perfmon } for pid=3706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.965000 audit[3706]: AVC avc: denied { perfmon } for pid=3706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.965000 audit[3706]: AVC avc: denied { perfmon } for pid=3706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.965000 audit[3706]: AVC avc: denied { perfmon } for pid=3706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.965000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.965000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.965000 audit: BPF prog-id=20 op=LOAD Dec 13 14:19:59.965000 audit[3706]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff81d61dd0 a2=94 a3=2 items=0 ppid=3577 pid=3706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:59.965000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:19:59.965000 audit: BPF prog-id=20 op=UNLOAD Dec 13 14:19:59.965000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.965000 audit[3706]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7fff81d61ca0 a2=28 a3=0 items=0 ppid=3577 pid=3706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:59.965000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:19:59.965000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.965000 audit[3706]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff81d61cd0 a2=28 a3=0 items=0 ppid=3577 pid=3706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:59.965000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:19:59.965000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.965000 audit[3706]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff81d61be0 a2=28 a3=0 items=0 ppid=3577 pid=3706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:59.965000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:19:59.965000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.965000 audit[3706]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7fff81d61cf0 a2=28 a3=0 items=0 ppid=3577 pid=3706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:59.965000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:19:59.965000 audit[3706]: AVC avc: denied { bpf } for 
pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.965000 audit[3706]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7fff81d61cd0 a2=28 a3=0 items=0 ppid=3577 pid=3706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:59.965000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:19:59.965000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.965000 audit[3706]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7fff81d61cc0 a2=28 a3=0 items=0 ppid=3577 pid=3706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:59.965000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:19:59.965000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.965000 audit[3706]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7fff81d61cf0 a2=28 a3=0 items=0 ppid=3577 pid=3706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:59.965000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:19:59.965000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.965000 audit[3706]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff81d61cd0 a2=28 a3=0 items=0 ppid=3577 pid=3706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:59.965000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:19:59.965000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.965000 audit[3706]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff81d61cf0 a2=28 a3=0 items=0 ppid=3577 pid=3706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:59.965000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:19:59.965000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.965000 audit[3706]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff81d61cc0 a2=28 a3=0 items=0 ppid=3577 pid=3706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:59.965000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:19:59.965000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.965000 audit[3706]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7fff81d61d30 a2=28 a3=0 items=0 ppid=3577 pid=3706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:59.965000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:19:59.965000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.965000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.965000 audit[3706]: AVC avc: denied { perfmon } for pid=3706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.965000 audit[3706]: AVC avc: denied { perfmon } for pid=3706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.965000 audit[3706]: AVC avc: denied { perfmon } for pid=3706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.965000 audit[3706]: AVC avc: denied { perfmon } for pid=3706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.965000 audit[3706]: AVC avc: denied { perfmon } for pid=3706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.965000 audit[3706]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff81d61ba0 a2=40 a3=0 items=0 ppid=3577 pid=3706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:59.965000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:19:59.967000 audit: BPF prog-id=21 op=UNLOAD Dec 13 14:19:59.968000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.968000 audit[3706]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7fff81d61b90 a2=50 a3=2800 items=0 ppid=3577 pid=3706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:59.968000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:19:59.968000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.968000 audit[3706]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7fff81d61b90 a2=50 a3=2800 items=0 ppid=3577 pid=3706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:59.968000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:19:59.968000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.968000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.968000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.968000 audit[3706]: AVC avc: denied { perfmon } for pid=3706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.968000 audit[3706]: AVC avc: denied { perfmon } for pid=3706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.968000 audit[3706]: AVC avc: denied { perfmon } for pid=3706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.968000 audit[3706]: AVC avc: denied { perfmon 
} for pid=3706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.968000 audit[3706]: AVC avc: denied { perfmon } for pid=3706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.968000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.968000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.968000 audit: BPF prog-id=22 op=LOAD Dec 13 14:19:59.968000 audit[3706]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff81d613b0 a2=94 a3=2 items=0 ppid=3577 pid=3706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:59.968000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:19:59.969000 audit: BPF prog-id=22 op=UNLOAD Dec 13 14:19:59.969000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.969000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.969000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.969000 audit[3706]: AVC avc: denied { perfmon } for pid=3706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.969000 audit[3706]: AVC avc: denied { perfmon } for pid=3706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.969000 audit[3706]: AVC avc: denied { perfmon } for pid=3706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.969000 audit[3706]: AVC avc: denied { perfmon } for pid=3706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.969000 audit[3706]: AVC avc: denied { perfmon } for pid=3706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.969000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.969000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.969000 audit: BPF prog-id=23 op=LOAD Dec 13 14:19:59.969000 audit[3706]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff81d614b0 a2=94 a3=2d items=0 ppid=3577 pid=3706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:59.969000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:19:59.974000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.974000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.974000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.974000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.974000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.974000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.974000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.974000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.974000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.974000 audit: BPF prog-id=24 op=LOAD Dec 13 14:19:59.974000 audit[3713]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffec3ea060 a2=98 a3=0 items=0 ppid=3577 pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:59.974000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:19:59.975000 audit: BPF prog-id=24 op=UNLOAD Dec 13 14:19:59.975000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Dec 13 14:19:59.975000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.975000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.975000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.975000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.975000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.975000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.975000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.975000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.975000 audit: BPF prog-id=25 op=LOAD Dec 13 14:19:59.975000 audit[3713]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffec3e9e40 a2=74 a3=540051 items=0 ppid=3577 pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:59.975000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:19:59.976000 audit: BPF prog-id=25 op=UNLOAD Dec 13 14:19:59.976000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.976000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.976000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.976000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.976000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.976000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.976000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.976000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.976000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:19:59.976000 audit: BPF prog-id=26 op=LOAD Dec 13 14:19:59.976000 audit[3713]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffec3e9e70 a2=94 a3=2 items=0 ppid=3577 pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:19:59.976000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:19:59.976000 audit: BPF prog-id=26 op=UNLOAD Dec 13 14:20:00.087000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.087000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.087000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.087000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.087000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.087000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.087000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.087000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.087000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.087000 audit: BPF prog-id=27 op=LOAD Dec 13 14:20:00.087000 audit[3713]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffec3e9d30 a2=40 a3=1 
items=0 ppid=3577 pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:00.087000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:20:00.087000 audit: BPF prog-id=27 op=UNLOAD Dec 13 14:20:00.087000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.087000 audit[3713]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7fffec3e9e00 a2=50 a3=7fffec3e9ee0 items=0 ppid=3577 pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:00.087000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:20:00.096000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.096000 audit[3713]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffec3e9d40 a2=28 a3=0 items=0 ppid=3577 pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:00.096000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:20:00.096000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.096000 audit[3713]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffec3e9d70 a2=28 a3=0 items=0 ppid=3577 pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:00.096000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:20:00.096000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.096000 audit[3713]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffec3e9c80 a2=28 a3=0 items=0 ppid=3577 pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:00.096000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:20:00.096000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.096000 audit[3713]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffec3e9d90 a2=28 a3=0 items=0 ppid=3577 pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:00.096000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:20:00.096000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.096000 audit[3713]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffec3e9d70 a2=28 a3=0 items=0 ppid=3577 pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:00.096000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:20:00.096000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.096000 audit[3713]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffec3e9d60 a2=28 a3=0 items=0 ppid=3577 pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:00.096000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:20:00.096000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.096000 audit[3713]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffec3e9d90 a2=28 a3=0 items=0 ppid=3577 pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:00.096000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:20:00.096000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.096000 audit[3713]: SYSCALL arch=c000003e 
syscall=321 success=no exit=-22 a0=12 a1=7fffec3e9d70 a2=28 a3=0 items=0 ppid=3577 pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:00.096000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:20:00.096000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.096000 audit[3713]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffec3e9d90 a2=28 a3=0 items=0 ppid=3577 pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:00.096000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:20:00.096000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.096000 audit[3713]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffec3e9d60 a2=28 a3=0 items=0 ppid=3577 pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:00.096000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:20:00.096000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.096000 audit[3713]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffec3e9dd0 a2=28 a3=0 items=0 ppid=3577 pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:00.096000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:20:00.097000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.097000 audit[3713]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fffec3e9b80 a2=50 a3=1 items=0 ppid=3577 pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:00.097000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:20:00.097000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.097000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.097000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.097000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.097000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.097000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.097000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.097000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.097000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.097000 audit: BPF prog-id=28 op=LOAD Dec 13 14:20:00.097000 audit[3713]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fffec3e9b80 a2=94 a3=5 items=0 ppid=3577 pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:00.097000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:20:00.097000 audit: BPF prog-id=28 op=UNLOAD Dec 13 14:20:00.097000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.097000 audit[3713]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fffec3e9c30 a2=50 a3=1 items=0 ppid=3577 pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:00.097000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:20:00.097000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.097000 audit[3713]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7fffec3e9d50 a2=4 a3=38 items=0 ppid=3577 pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:00.097000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:20:00.097000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.097000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.097000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.097000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.097000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.097000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.097000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.097000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.097000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.097000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.097000 audit[3713]: AVC avc: denied { confidentiality } for pid=3713 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 14:20:00.097000 audit[3713]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fffec3e9da0 a2=94 a3=6 items=0 ppid=3577 pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:00.097000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:20:00.097000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.097000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.097000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.097000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.097000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.097000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.097000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.097000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.097000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.097000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.097000 audit[3713]: AVC avc: denied { confidentiality } for pid=3713 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 14:20:00.097000 audit[3713]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fffec3e9550 a2=94 a3=83 items=0 ppid=3577 pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:00.097000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:20:00.097000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 
14:20:00.097000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.097000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.097000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.097000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.097000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.097000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.097000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.097000 audit[3713]: AVC avc: denied { perfmon } for pid=3713 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.097000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.097000 audit[3713]: AVC avc: denied { confidentiality } for pid=3713 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 14:20:00.097000 audit[3713]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fffec3e9550 a2=94 a3=83 items=0 ppid=3577 pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:00.097000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:20:00.098000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.098000 audit[3713]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fffec3eaf90 a2=10 a3=208 items=0 ppid=3577 pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:00.098000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:20:00.098000 audit[3713]: AVC 
avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.098000 audit[3713]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fffec3eae30 a2=10 a3=3 items=0 ppid=3577 pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:00.098000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:20:00.098000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.098000 audit[3713]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fffec3eadd0 a2=10 a3=3 items=0 ppid=3577 pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:00.098000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:20:00.098000 audit[3713]: AVC avc: denied { bpf } for pid=3713 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:20:00.098000 audit[3713]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fffec3eadd0 a2=10 a3=7 items=0 ppid=3577 pid=3713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:00.098000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:20:00.104000 audit: BPF prog-id=23 op=UNLOAD Dec 13 14:20:00.150000 audit[3737]: NETFILTER_CFG table=nat:97 family=2 entries=15 op=nft_register_chain pid=3737 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:20:00.150000 audit[3737]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffea159f730 a2=0 a3=7ffea159f71c items=0 ppid=3577 pid=3737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:00.150000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:20:00.151000 audit[3739]: NETFILTER_CFG table=mangle:98 family=2 entries=16 op=nft_register_chain pid=3739 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:20:00.151000 audit[3739]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffc319d7620 a2=0 a3=7ffc319d760c items=0 ppid=3577 pid=3739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:00.151000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:20:00.155000 audit[3736]: NETFILTER_CFG table=raw:99 family=2 entries=21 op=nft_register_chain pid=3736 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:20:00.155000 audit[3736]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffe91253990 a2=0 a3=7ffe9125397c items=0 ppid=3577 pid=3736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:00.155000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:20:00.156000 audit[3738]: NETFILTER_CFG table=filter:100 family=2 entries=39 op=nft_register_chain pid=3738 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:20:00.156000 audit[3738]: SYSCALL arch=c000003e syscall=46 success=yes exit=18968 a0=3 a1=7ffd0775e4b0 a2=0 a3=7ffd0775e49c items=0 ppid=3577 pid=3738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:00.156000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:20:00.664791 systemd[1]: Started sshd@10-10.0.0.24:22-10.0.0.1:46362.service. Dec 13 14:20:00.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.24:22-10.0.0.1:46362 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:00.700000 audit[3747]: USER_ACCT pid=3747 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:00.702257 sshd[3747]: Accepted publickey for core from 10.0.0.1 port 46362 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg Dec 13 14:20:00.701000 audit[3747]: CRED_ACQ pid=3747 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:00.702000 audit[3747]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffd19e4e60 a2=3 a3=0 items=0 ppid=1 pid=3747 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:00.702000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:20:00.703463 sshd[3747]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:20:00.707349 systemd-logind[1295]: New session 11 of user core. Dec 13 14:20:00.708190 systemd[1]: Started session-11.scope. 
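
The PROCTITLE fields in the audit records above are hex-encoded command lines whose arguments are separated by NUL bytes. A minimal Go sketch (an illustration, not part of the logged tooling) that decodes them; the two sample strings are copied verbatim from the records above:

// decode_proctitle.go — turns a hex-encoded audit PROCTITLE field into a
// readable command line. The kernel hex-encodes the process title and
// separates argv entries with NUL bytes.
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func decodeProctitle(h string) (string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return "", err
	}
	// argv entries are NUL-separated; replace the separators with spaces.
	return strings.ReplaceAll(strings.TrimRight(string(raw), "\x00"), "\x00", " "), nil
}

func main() {
	// Taken verbatim from the records above.
	samples := []string{
		// -> "iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000"
		"69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030",
		// -> "sshd: core [priv]" (a retitled process, so no NUL separators)
		"737368643A20636F7265205B707269765D",
	}
	for _, s := range samples {
		cmd, err := decodeProctitle(s)
		if err != nil {
			fmt.Println("bad record:", err)
			continue
		}
		fmt.Println(cmd)
	}
}

The earlier bpftool records decode the same way, to "bpftool prog load /usr/lib/calico/bpf/filter.o /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A type xdp" and "bpftool --json --pretty prog show pinned /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A".
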
Dec 13 14:20:00.712000 audit[3747]: USER_START pid=3747 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:00.714000 audit[3750]: CRED_ACQ pid=3750 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:00.832826 sshd[3747]: pam_unix(sshd:session): session closed for user core Dec 13 14:20:00.833000 audit[3747]: USER_END pid=3747 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:00.833000 audit[3747]: CRED_DISP pid=3747 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:00.835712 systemd[1]: Started sshd@11-10.0.0.24:22-10.0.0.1:46378.service. Dec 13 14:20:00.834000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.24:22-10.0.0.1:46378 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:00.836629 systemd[1]: sshd@10-10.0.0.24:22-10.0.0.1:46362.service: Deactivated successfully. Dec 13 14:20:00.835000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.24:22-10.0.0.1:46362 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:00.837854 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 14:20:00.838325 systemd-logind[1295]: Session 11 logged out. Waiting for processes to exit. Dec 13 14:20:00.839487 systemd-logind[1295]: Removed session 11. Dec 13 14:20:00.867000 audit[3763]: USER_ACCT pid=3763 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:00.868560 sshd[3763]: Accepted publickey for core from 10.0.0.1 port 46378 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg Dec 13 14:20:00.868000 audit[3763]: CRED_ACQ pid=3763 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:00.868000 audit[3763]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff810d1d30 a2=3 a3=0 items=0 ppid=1 pid=3763 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:00.868000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:20:00.870111 sshd[3763]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:20:00.873971 systemd-logind[1295]: New session 12 of user core. 
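
The numeric identifiers that recur through those records are standard x86-64 values from the kernel uapi headers. A short Go sketch naming them (illustrative constants, not output of any logged tool):

// audit_fields.go — spells out the numeric identifiers recurring in the audit
// records above, using the standard x86-64 uapi values.
package main

import "fmt"

const (
	auditArchX8664 uint32 = 0xC000003E // arch=c000003e: AUDIT_ARCH_X86_64
	sysWrite              = 1          // syscall=1 in the sshd records: write(2)
	sysSendmsg            = 46         // syscall=46 in the NETFILTER_CFG records: sendmsg(2), netlink to nftables
	sysBpf                = 321        // syscall=321 in the bpftool records: bpf(2)
	capPerfmon            = 38         // capability=38 in the AVC denials: CAP_PERFMON
	capBpf                = 39         // capability=39 in the AVC denials: CAP_BPF
	unsetAuid      uint32 = 4294967295 // auid=4294967295: (uint32)(-1), no login UID set
)

func main() {
	fmt.Printf("arch=%x syscall=%d -> bpf(2), gated by CAP_BPF (%d) and CAP_PERFMON (%d)\n",
		auditArchX8664, sysBpf, capBpf, capPerfmon)
	fmt.Printf("write=%d sendmsg=%d unset auid=%d\n", sysWrite, sysSendmsg, unsetAuid)
}

With those names in place, the long AVC run above reads as bpftool's bpf(2) calls being checked against CAP_BPF and CAP_PERFMON in enforcing mode (permissive=0), with the tclass=lockdown denials citing "use of bpf to read kernel RAM" as the lockdown_reason.
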
Dec 13 14:20:00.875102 systemd[1]: Started session-12.scope. Dec 13 14:20:00.878000 audit[3763]: USER_START pid=3763 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:00.880000 audit[3768]: CRED_ACQ pid=3768 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:01.052728 sshd[3763]: pam_unix(sshd:session): session closed for user core Dec 13 14:20:01.053000 audit[3763]: USER_END pid=3763 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:01.054000 audit[3763]: CRED_DISP pid=3763 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:01.055416 systemd[1]: Started sshd@12-10.0.0.24:22-10.0.0.1:46392.service. Dec 13 14:20:01.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.24:22-10.0.0.1:46392 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:01.062663 systemd-logind[1295]: Session 12 logged out. Waiting for processes to exit. Dec 13 14:20:01.067000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.24:22-10.0.0.1:46378 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:01.068041 systemd[1]: sshd@11-10.0.0.24:22-10.0.0.1:46378.service: Deactivated successfully. Dec 13 14:20:01.069341 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 14:20:01.071932 systemd-logind[1295]: Removed session 12. 
Dec 13 14:20:01.098000 audit[3776]: USER_ACCT pid=3776 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:01.100156 sshd[3776]: Accepted publickey for core from 10.0.0.1 port 46392 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg Dec 13 14:20:01.099000 audit[3776]: CRED_ACQ pid=3776 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:01.099000 audit[3776]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd48f7c910 a2=3 a3=0 items=0 ppid=1 pid=3776 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:01.099000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:20:01.101444 sshd[3776]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:20:01.105516 systemd-logind[1295]: New session 13 of user core. Dec 13 14:20:01.106406 systemd[1]: Started session-13.scope. Dec 13 14:20:01.109000 audit[3776]: USER_START pid=3776 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:01.111000 audit[3781]: CRED_ACQ pid=3781 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:01.228543 sshd[3776]: pam_unix(sshd:session): session closed for user core Dec 13 14:20:01.229000 audit[3776]: USER_END pid=3776 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:01.229000 audit[3776]: CRED_DISP pid=3776 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:01.232166 systemd[1]: sshd@12-10.0.0.24:22-10.0.0.1:46392.service: Deactivated successfully. Dec 13 14:20:01.231000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.24:22-10.0.0.1:46392 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:01.233062 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 14:20:01.233827 systemd-logind[1295]: Session 13 logged out. Waiting for processes to exit. Dec 13 14:20:01.235262 systemd-logind[1295]: Removed session 13. 
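
Each of the three SSH connections above follows the same audit sequence: SERVICE_START for the per-connection sshd@ unit, PAM USER_ACCT and CRED_ACQ, USER_START when the session scope starts, then USER_END, CRED_DISP and SERVICE_STOP on close. A small Go sketch (fed the journal text on stdin; not part of the logged system) that pairs USER_START/USER_END records by their ses= field — sessions 11, 12 and 13 here each open and close in well under a second:

// ssh_sessions.go — pairs the PAM USER_START / USER_END audit records by their
// ses= field to show which SSH sessions opened and closed.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var (
	reType = regexp.MustCompile(`audit\[\d+\]: (USER_START|USER_END)`)
	reSes  = regexp.MustCompile(`\bses=(\d+)\b`)
)

func main() {
	open := map[string]bool{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines are long
	for sc.Scan() {
		line := sc.Text()
		t := reType.FindStringSubmatch(line)
		s := reSes.FindStringSubmatch(line)
		if t == nil || s == nil {
			continue
		}
		switch t[1] {
		case "USER_START":
			open[s[1]] = true
			fmt.Printf("session %s opened\n", s[1])
		case "USER_END":
			delete(open, s[1])
			fmt.Printf("session %s closed\n", s[1])
		}
	}
	fmt.Printf("sessions still open: %d\n", len(open))
}
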
Dec 13 14:20:01.247247 systemd-networkd[1087]: vxlan.calico: Gained IPv6LL Dec 13 14:20:02.272133 env[1311]: time="2024-12-13T14:20:02.272079239Z" level=info msg="StopPodSandbox for \"8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3\"" Dec 13 14:20:02.272609 env[1311]: time="2024-12-13T14:20:02.272079239Z" level=info msg="StopPodSandbox for \"7ae3c896a9fae2bd7d601cf8c2536ec651b6c153e49ffd437e2d9ce96e31ba91\"" Dec 13 14:20:02.318481 kubelet[2218]: I1213 14:20:02.317568 2218 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-xtcr5" podStartSLOduration=7.328752503 podStartE2EDuration="28.317512354s" podCreationTimestamp="2024-12-13 14:19:34 +0000 UTC" firstStartedPulling="2024-12-13 14:19:35.155311309 +0000 UTC m=+21.978369222" lastFinishedPulling="2024-12-13 14:19:56.14407116 +0000 UTC m=+42.967129073" observedRunningTime="2024-12-13 14:19:57.859215435 +0000 UTC m=+44.682273368" watchObservedRunningTime="2024-12-13 14:20:02.317512354 +0000 UTC m=+49.140570267" Dec 13 14:20:02.383355 env[1311]: 2024-12-13 14:20:02.317 [INFO][3828] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3" Dec 13 14:20:02.383355 env[1311]: 2024-12-13 14:20:02.318 [INFO][3828] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3" iface="eth0" netns="/var/run/netns/cni-da284d04-7e07-529f-eef6-1a609df980bd" Dec 13 14:20:02.383355 env[1311]: 2024-12-13 14:20:02.318 [INFO][3828] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3" iface="eth0" netns="/var/run/netns/cni-da284d04-7e07-529f-eef6-1a609df980bd" Dec 13 14:20:02.383355 env[1311]: 2024-12-13 14:20:02.319 [INFO][3828] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3" iface="eth0" netns="/var/run/netns/cni-da284d04-7e07-529f-eef6-1a609df980bd" Dec 13 14:20:02.383355 env[1311]: 2024-12-13 14:20:02.319 [INFO][3828] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3" Dec 13 14:20:02.383355 env[1311]: 2024-12-13 14:20:02.319 [INFO][3828] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3" Dec 13 14:20:02.383355 env[1311]: 2024-12-13 14:20:02.367 [INFO][3845] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3" HandleID="k8s-pod-network.8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3" Workload="localhost-k8s-calico--apiserver--67dd9c689b--nvrx4-eth0" Dec 13 14:20:02.383355 env[1311]: 2024-12-13 14:20:02.368 [INFO][3845] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:20:02.383355 env[1311]: 2024-12-13 14:20:02.368 [INFO][3845] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:20:02.383355 env[1311]: 2024-12-13 14:20:02.377 [WARNING][3845] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3" HandleID="k8s-pod-network.8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3" Workload="localhost-k8s-calico--apiserver--67dd9c689b--nvrx4-eth0" Dec 13 14:20:02.383355 env[1311]: 2024-12-13 14:20:02.378 [INFO][3845] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3" HandleID="k8s-pod-network.8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3" Workload="localhost-k8s-calico--apiserver--67dd9c689b--nvrx4-eth0" Dec 13 14:20:02.383355 env[1311]: 2024-12-13 14:20:02.379 [INFO][3845] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:20:02.383355 env[1311]: 2024-12-13 14:20:02.381 [INFO][3828] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3" Dec 13 14:20:02.384032 env[1311]: time="2024-12-13T14:20:02.383488531Z" level=info msg="TearDown network for sandbox \"8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3\" successfully" Dec 13 14:20:02.384032 env[1311]: time="2024-12-13T14:20:02.383522966Z" level=info msg="StopPodSandbox for \"8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3\" returns successfully" Dec 13 14:20:02.384727 env[1311]: time="2024-12-13T14:20:02.384673906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67dd9c689b-nvrx4,Uid:237a3d9e-0254-4057-bf82-74077775e376,Namespace:calico-apiserver,Attempt:1,}" Dec 13 14:20:02.386613 systemd[1]: run-netns-cni\x2dda284d04\x2d7e07\x2d529f\x2deef6\x2d1a609df980bd.mount: Deactivated successfully. Dec 13 14:20:02.392177 env[1311]: 2024-12-13 14:20:02.321 [INFO][3829] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7ae3c896a9fae2bd7d601cf8c2536ec651b6c153e49ffd437e2d9ce96e31ba91" Dec 13 14:20:02.392177 env[1311]: 2024-12-13 14:20:02.321 [INFO][3829] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7ae3c896a9fae2bd7d601cf8c2536ec651b6c153e49ffd437e2d9ce96e31ba91" iface="eth0" netns="/var/run/netns/cni-73860888-4bbf-255d-1908-ed1b345599ed" Dec 13 14:20:02.392177 env[1311]: 2024-12-13 14:20:02.322 [INFO][3829] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7ae3c896a9fae2bd7d601cf8c2536ec651b6c153e49ffd437e2d9ce96e31ba91" iface="eth0" netns="/var/run/netns/cni-73860888-4bbf-255d-1908-ed1b345599ed" Dec 13 14:20:02.392177 env[1311]: 2024-12-13 14:20:02.322 [INFO][3829] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7ae3c896a9fae2bd7d601cf8c2536ec651b6c153e49ffd437e2d9ce96e31ba91" iface="eth0" netns="/var/run/netns/cni-73860888-4bbf-255d-1908-ed1b345599ed" Dec 13 14:20:02.392177 env[1311]: 2024-12-13 14:20:02.322 [INFO][3829] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7ae3c896a9fae2bd7d601cf8c2536ec651b6c153e49ffd437e2d9ce96e31ba91" Dec 13 14:20:02.392177 env[1311]: 2024-12-13 14:20:02.322 [INFO][3829] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7ae3c896a9fae2bd7d601cf8c2536ec651b6c153e49ffd437e2d9ce96e31ba91" Dec 13 14:20:02.392177 env[1311]: 2024-12-13 14:20:02.371 [INFO][3846] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7ae3c896a9fae2bd7d601cf8c2536ec651b6c153e49ffd437e2d9ce96e31ba91" HandleID="k8s-pod-network.7ae3c896a9fae2bd7d601cf8c2536ec651b6c153e49ffd437e2d9ce96e31ba91" Workload="localhost-k8s-coredns--76f75df574--77dkv-eth0" Dec 13 14:20:02.392177 env[1311]: 2024-12-13 14:20:02.372 [INFO][3846] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:20:02.392177 env[1311]: 2024-12-13 14:20:02.379 [INFO][3846] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:20:02.392177 env[1311]: 2024-12-13 14:20:02.386 [WARNING][3846] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7ae3c896a9fae2bd7d601cf8c2536ec651b6c153e49ffd437e2d9ce96e31ba91" HandleID="k8s-pod-network.7ae3c896a9fae2bd7d601cf8c2536ec651b6c153e49ffd437e2d9ce96e31ba91" Workload="localhost-k8s-coredns--76f75df574--77dkv-eth0" Dec 13 14:20:02.392177 env[1311]: 2024-12-13 14:20:02.386 [INFO][3846] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7ae3c896a9fae2bd7d601cf8c2536ec651b6c153e49ffd437e2d9ce96e31ba91" HandleID="k8s-pod-network.7ae3c896a9fae2bd7d601cf8c2536ec651b6c153e49ffd437e2d9ce96e31ba91" Workload="localhost-k8s-coredns--76f75df574--77dkv-eth0" Dec 13 14:20:02.392177 env[1311]: 2024-12-13 14:20:02.388 [INFO][3846] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:20:02.392177 env[1311]: 2024-12-13 14:20:02.390 [INFO][3829] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7ae3c896a9fae2bd7d601cf8c2536ec651b6c153e49ffd437e2d9ce96e31ba91" Dec 13 14:20:02.392622 env[1311]: time="2024-12-13T14:20:02.392356798Z" level=info msg="TearDown network for sandbox \"7ae3c896a9fae2bd7d601cf8c2536ec651b6c153e49ffd437e2d9ce96e31ba91\" successfully" Dec 13 14:20:02.392622 env[1311]: time="2024-12-13T14:20:02.392399829Z" level=info msg="StopPodSandbox for \"7ae3c896a9fae2bd7d601cf8c2536ec651b6c153e49ffd437e2d9ce96e31ba91\" returns successfully" Dec 13 14:20:02.392996 kubelet[2218]: E1213 14:20:02.392970 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:20:02.394353 env[1311]: time="2024-12-13T14:20:02.394310756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-77dkv,Uid:727d322d-22ee-4e52-af18-8f9ebc3141c1,Namespace:kube-system,Attempt:1,}" Dec 13 14:20:02.395271 systemd[1]: run-netns-cni\x2d73860888\x2d4bbf\x2d255d\x2d1908\x2ded1b345599ed.mount: Deactivated successfully. 
Dec 13 14:20:02.716938 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:20:02.717119 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali0f0c75667ae: link becomes ready Dec 13 14:20:02.713670 systemd-networkd[1087]: cali0f0c75667ae: Link UP Dec 13 14:20:02.718937 systemd-networkd[1087]: cali0f0c75667ae: Gained carrier Dec 13 14:20:02.726954 env[1311]: 2024-12-13 14:20:02.628 [INFO][3860] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--67dd9c689b--nvrx4-eth0 calico-apiserver-67dd9c689b- calico-apiserver 237a3d9e-0254-4057-bf82-74077775e376 934 0 2024-12-13 14:19:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67dd9c689b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-67dd9c689b-nvrx4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0f0c75667ae [] []}} ContainerID="ebbb2bf361d79d220cb412bfedae470e8bcec2f90873fe7f7cf03bd47ebfb52d" Namespace="calico-apiserver" Pod="calico-apiserver-67dd9c689b-nvrx4" WorkloadEndpoint="localhost-k8s-calico--apiserver--67dd9c689b--nvrx4-" Dec 13 14:20:02.726954 env[1311]: 2024-12-13 14:20:02.628 [INFO][3860] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ebbb2bf361d79d220cb412bfedae470e8bcec2f90873fe7f7cf03bd47ebfb52d" Namespace="calico-apiserver" Pod="calico-apiserver-67dd9c689b-nvrx4" WorkloadEndpoint="localhost-k8s-calico--apiserver--67dd9c689b--nvrx4-eth0" Dec 13 14:20:02.726954 env[1311]: 2024-12-13 14:20:02.668 [INFO][3891] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ebbb2bf361d79d220cb412bfedae470e8bcec2f90873fe7f7cf03bd47ebfb52d" HandleID="k8s-pod-network.ebbb2bf361d79d220cb412bfedae470e8bcec2f90873fe7f7cf03bd47ebfb52d" Workload="localhost-k8s-calico--apiserver--67dd9c689b--nvrx4-eth0" Dec 13 14:20:02.726954 env[1311]: 2024-12-13 14:20:02.679 [INFO][3891] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ebbb2bf361d79d220cb412bfedae470e8bcec2f90873fe7f7cf03bd47ebfb52d" HandleID="k8s-pod-network.ebbb2bf361d79d220cb412bfedae470e8bcec2f90873fe7f7cf03bd47ebfb52d" Workload="localhost-k8s-calico--apiserver--67dd9c689b--nvrx4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003754e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-67dd9c689b-nvrx4", "timestamp":"2024-12-13 14:20:02.668535528 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 14:20:02.726954 env[1311]: 2024-12-13 14:20:02.679 [INFO][3891] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:20:02.726954 env[1311]: 2024-12-13 14:20:02.679 [INFO][3891] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 14:20:02.726954 env[1311]: 2024-12-13 14:20:02.679 [INFO][3891] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 14:20:02.726954 env[1311]: 2024-12-13 14:20:02.681 [INFO][3891] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ebbb2bf361d79d220cb412bfedae470e8bcec2f90873fe7f7cf03bd47ebfb52d" host="localhost" Dec 13 14:20:02.726954 env[1311]: 2024-12-13 14:20:02.686 [INFO][3891] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 14:20:02.726954 env[1311]: 2024-12-13 14:20:02.690 [INFO][3891] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 14:20:02.726954 env[1311]: 2024-12-13 14:20:02.693 [INFO][3891] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 14:20:02.726954 env[1311]: 2024-12-13 14:20:02.695 [INFO][3891] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 14:20:02.726954 env[1311]: 2024-12-13 14:20:02.695 [INFO][3891] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ebbb2bf361d79d220cb412bfedae470e8bcec2f90873fe7f7cf03bd47ebfb52d" host="localhost" Dec 13 14:20:02.726954 env[1311]: 2024-12-13 14:20:02.696 [INFO][3891] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ebbb2bf361d79d220cb412bfedae470e8bcec2f90873fe7f7cf03bd47ebfb52d Dec 13 14:20:02.726954 env[1311]: 2024-12-13 14:20:02.700 [INFO][3891] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ebbb2bf361d79d220cb412bfedae470e8bcec2f90873fe7f7cf03bd47ebfb52d" host="localhost" Dec 13 14:20:02.726954 env[1311]: 2024-12-13 14:20:02.706 [INFO][3891] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.ebbb2bf361d79d220cb412bfedae470e8bcec2f90873fe7f7cf03bd47ebfb52d" host="localhost" Dec 13 14:20:02.726954 env[1311]: 2024-12-13 14:20:02.706 [INFO][3891] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.ebbb2bf361d79d220cb412bfedae470e8bcec2f90873fe7f7cf03bd47ebfb52d" host="localhost" Dec 13 14:20:02.726954 env[1311]: 2024-12-13 14:20:02.706 [INFO][3891] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 14:20:02.726954 env[1311]: 2024-12-13 14:20:02.706 [INFO][3891] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="ebbb2bf361d79d220cb412bfedae470e8bcec2f90873fe7f7cf03bd47ebfb52d" HandleID="k8s-pod-network.ebbb2bf361d79d220cb412bfedae470e8bcec2f90873fe7f7cf03bd47ebfb52d" Workload="localhost-k8s-calico--apiserver--67dd9c689b--nvrx4-eth0" Dec 13 14:20:02.728427 env[1311]: 2024-12-13 14:20:02.709 [INFO][3860] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ebbb2bf361d79d220cb412bfedae470e8bcec2f90873fe7f7cf03bd47ebfb52d" Namespace="calico-apiserver" Pod="calico-apiserver-67dd9c689b-nvrx4" WorkloadEndpoint="localhost-k8s-calico--apiserver--67dd9c689b--nvrx4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67dd9c689b--nvrx4-eth0", GenerateName:"calico-apiserver-67dd9c689b-", Namespace:"calico-apiserver", SelfLink:"", UID:"237a3d9e-0254-4057-bf82-74077775e376", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 19, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67dd9c689b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-67dd9c689b-nvrx4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0f0c75667ae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:20:02.728427 env[1311]: 2024-12-13 14:20:02.709 [INFO][3860] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="ebbb2bf361d79d220cb412bfedae470e8bcec2f90873fe7f7cf03bd47ebfb52d" Namespace="calico-apiserver" Pod="calico-apiserver-67dd9c689b-nvrx4" WorkloadEndpoint="localhost-k8s-calico--apiserver--67dd9c689b--nvrx4-eth0" Dec 13 14:20:02.728427 env[1311]: 2024-12-13 14:20:02.709 [INFO][3860] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0f0c75667ae ContainerID="ebbb2bf361d79d220cb412bfedae470e8bcec2f90873fe7f7cf03bd47ebfb52d" Namespace="calico-apiserver" Pod="calico-apiserver-67dd9c689b-nvrx4" WorkloadEndpoint="localhost-k8s-calico--apiserver--67dd9c689b--nvrx4-eth0" Dec 13 14:20:02.728427 env[1311]: 2024-12-13 14:20:02.716 [INFO][3860] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ebbb2bf361d79d220cb412bfedae470e8bcec2f90873fe7f7cf03bd47ebfb52d" Namespace="calico-apiserver" Pod="calico-apiserver-67dd9c689b-nvrx4" WorkloadEndpoint="localhost-k8s-calico--apiserver--67dd9c689b--nvrx4-eth0" Dec 13 14:20:02.728427 env[1311]: 2024-12-13 14:20:02.716 [INFO][3860] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ebbb2bf361d79d220cb412bfedae470e8bcec2f90873fe7f7cf03bd47ebfb52d" Namespace="calico-apiserver" 
Pod="calico-apiserver-67dd9c689b-nvrx4" WorkloadEndpoint="localhost-k8s-calico--apiserver--67dd9c689b--nvrx4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67dd9c689b--nvrx4-eth0", GenerateName:"calico-apiserver-67dd9c689b-", Namespace:"calico-apiserver", SelfLink:"", UID:"237a3d9e-0254-4057-bf82-74077775e376", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 19, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67dd9c689b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ebbb2bf361d79d220cb412bfedae470e8bcec2f90873fe7f7cf03bd47ebfb52d", Pod:"calico-apiserver-67dd9c689b-nvrx4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0f0c75667ae", MAC:"3a:c4:09:8b:b9:8e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:20:02.728427 env[1311]: 2024-12-13 14:20:02.724 [INFO][3860] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ebbb2bf361d79d220cb412bfedae470e8bcec2f90873fe7f7cf03bd47ebfb52d" Namespace="calico-apiserver" Pod="calico-apiserver-67dd9c689b-nvrx4" WorkloadEndpoint="localhost-k8s-calico--apiserver--67dd9c689b--nvrx4-eth0" Dec 13 14:20:02.744000 audit[3931]: NETFILTER_CFG table=filter:101 family=2 entries=40 op=nft_register_chain pid=3931 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:20:02.745782 env[1311]: time="2024-12-13T14:20:02.743525706Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:20:02.745782 env[1311]: time="2024-12-13T14:20:02.743579517Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:20:02.745782 env[1311]: time="2024-12-13T14:20:02.743593493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:20:02.745782 env[1311]: time="2024-12-13T14:20:02.743774704Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ebbb2bf361d79d220cb412bfedae470e8bcec2f90873fe7f7cf03bd47ebfb52d pid=3925 runtime=io.containerd.runc.v2 Dec 13 14:20:02.744000 audit[3931]: SYSCALL arch=c000003e syscall=46 success=yes exit=23492 a0=3 a1=7ffdc8386c80 a2=0 a3=7ffdc8386c6c items=0 ppid=3577 pid=3931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:02.744000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:20:02.747514 systemd-networkd[1087]: calib107a93b52c: Link UP Dec 13 14:20:02.747715 systemd-networkd[1087]: calib107a93b52c: Gained carrier Dec 13 14:20:02.748071 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calib107a93b52c: link becomes ready Dec 13 14:20:02.762464 env[1311]: 2024-12-13 14:20:02.628 [INFO][3871] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--77dkv-eth0 coredns-76f75df574- kube-system 727d322d-22ee-4e52-af18-8f9ebc3141c1 935 0 2024-12-13 14:19:26 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-77dkv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib107a93b52c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="6a774f300c2b6ffd365d56c4066fa64070176f6544f31088e430cfcdadc08df4" Namespace="kube-system" Pod="coredns-76f75df574-77dkv" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--77dkv-" Dec 13 14:20:02.762464 env[1311]: 2024-12-13 14:20:02.628 [INFO][3871] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6a774f300c2b6ffd365d56c4066fa64070176f6544f31088e430cfcdadc08df4" Namespace="kube-system" Pod="coredns-76f75df574-77dkv" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--77dkv-eth0" Dec 13 14:20:02.762464 env[1311]: 2024-12-13 14:20:02.669 [INFO][3898] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6a774f300c2b6ffd365d56c4066fa64070176f6544f31088e430cfcdadc08df4" HandleID="k8s-pod-network.6a774f300c2b6ffd365d56c4066fa64070176f6544f31088e430cfcdadc08df4" Workload="localhost-k8s-coredns--76f75df574--77dkv-eth0" Dec 13 14:20:02.762464 env[1311]: 2024-12-13 14:20:02.679 [INFO][3898] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6a774f300c2b6ffd365d56c4066fa64070176f6544f31088e430cfcdadc08df4" HandleID="k8s-pod-network.6a774f300c2b6ffd365d56c4066fa64070176f6544f31088e430cfcdadc08df4" Workload="localhost-k8s-coredns--76f75df574--77dkv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dd5b0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-77dkv", "timestamp":"2024-12-13 14:20:02.669482246 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 
14:20:02.762464 env[1311]: 2024-12-13 14:20:02.679 [INFO][3898] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:20:02.762464 env[1311]: 2024-12-13 14:20:02.706 [INFO][3898] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:20:02.762464 env[1311]: 2024-12-13 14:20:02.706 [INFO][3898] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 14:20:02.762464 env[1311]: 2024-12-13 14:20:02.708 [INFO][3898] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6a774f300c2b6ffd365d56c4066fa64070176f6544f31088e430cfcdadc08df4" host="localhost" Dec 13 14:20:02.762464 env[1311]: 2024-12-13 14:20:02.713 [INFO][3898] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 14:20:02.762464 env[1311]: 2024-12-13 14:20:02.720 [INFO][3898] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 14:20:02.762464 env[1311]: 2024-12-13 14:20:02.723 [INFO][3898] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 14:20:02.762464 env[1311]: 2024-12-13 14:20:02.726 [INFO][3898] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 14:20:02.762464 env[1311]: 2024-12-13 14:20:02.726 [INFO][3898] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6a774f300c2b6ffd365d56c4066fa64070176f6544f31088e430cfcdadc08df4" host="localhost" Dec 13 14:20:02.762464 env[1311]: 2024-12-13 14:20:02.729 [INFO][3898] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6a774f300c2b6ffd365d56c4066fa64070176f6544f31088e430cfcdadc08df4 Dec 13 14:20:02.762464 env[1311]: 2024-12-13 14:20:02.734 [INFO][3898] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6a774f300c2b6ffd365d56c4066fa64070176f6544f31088e430cfcdadc08df4" host="localhost" Dec 13 14:20:02.762464 env[1311]: 2024-12-13 14:20:02.741 [INFO][3898] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.6a774f300c2b6ffd365d56c4066fa64070176f6544f31088e430cfcdadc08df4" host="localhost" Dec 13 14:20:02.762464 env[1311]: 2024-12-13 14:20:02.741 [INFO][3898] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.6a774f300c2b6ffd365d56c4066fa64070176f6544f31088e430cfcdadc08df4" host="localhost" Dec 13 14:20:02.762464 env[1311]: 2024-12-13 14:20:02.741 [INFO][3898] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 14:20:02.762464 env[1311]: 2024-12-13 14:20:02.741 [INFO][3898] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="6a774f300c2b6ffd365d56c4066fa64070176f6544f31088e430cfcdadc08df4" HandleID="k8s-pod-network.6a774f300c2b6ffd365d56c4066fa64070176f6544f31088e430cfcdadc08df4" Workload="localhost-k8s-coredns--76f75df574--77dkv-eth0" Dec 13 14:20:02.763227 env[1311]: 2024-12-13 14:20:02.744 [INFO][3871] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6a774f300c2b6ffd365d56c4066fa64070176f6544f31088e430cfcdadc08df4" Namespace="kube-system" Pod="coredns-76f75df574-77dkv" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--77dkv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--77dkv-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"727d322d-22ee-4e52-af18-8f9ebc3141c1", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 19, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-77dkv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib107a93b52c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:20:02.763227 env[1311]: 2024-12-13 14:20:02.744 [INFO][3871] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="6a774f300c2b6ffd365d56c4066fa64070176f6544f31088e430cfcdadc08df4" Namespace="kube-system" Pod="coredns-76f75df574-77dkv" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--77dkv-eth0" Dec 13 14:20:02.763227 env[1311]: 2024-12-13 14:20:02.744 [INFO][3871] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib107a93b52c ContainerID="6a774f300c2b6ffd365d56c4066fa64070176f6544f31088e430cfcdadc08df4" Namespace="kube-system" Pod="coredns-76f75df574-77dkv" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--77dkv-eth0" Dec 13 14:20:02.763227 env[1311]: 2024-12-13 14:20:02.747 [INFO][3871] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6a774f300c2b6ffd365d56c4066fa64070176f6544f31088e430cfcdadc08df4" Namespace="kube-system" Pod="coredns-76f75df574-77dkv" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--77dkv-eth0" Dec 13 14:20:02.763227 env[1311]: 2024-12-13 14:20:02.748 [INFO][3871] cni-plugin/k8s.go 414: Added 
Mac, interface name, and active container ID to endpoint ContainerID="6a774f300c2b6ffd365d56c4066fa64070176f6544f31088e430cfcdadc08df4" Namespace="kube-system" Pod="coredns-76f75df574-77dkv" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--77dkv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--77dkv-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"727d322d-22ee-4e52-af18-8f9ebc3141c1", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 19, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6a774f300c2b6ffd365d56c4066fa64070176f6544f31088e430cfcdadc08df4", Pod:"coredns-76f75df574-77dkv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib107a93b52c", MAC:"a6:68:6f:04:05:ed", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:20:02.763227 env[1311]: 2024-12-13 14:20:02.759 [INFO][3871] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6a774f300c2b6ffd365d56c4066fa64070176f6544f31088e430cfcdadc08df4" Namespace="kube-system" Pod="coredns-76f75df574-77dkv" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--77dkv-eth0" Dec 13 14:20:02.773000 audit[3963]: NETFILTER_CFG table=filter:102 family=2 entries=38 op=nft_register_chain pid=3963 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:20:02.773000 audit[3963]: SYSCALL arch=c000003e syscall=46 success=yes exit=20336 a0=3 a1=7ffefa162970 a2=0 a3=7ffefa16295c items=0 ppid=3577 pid=3963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:02.773000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:20:02.779469 systemd-resolved[1229]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 14:20:02.784955 env[1311]: time="2024-12-13T14:20:02.784774897Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:20:02.784955 env[1311]: time="2024-12-13T14:20:02.784816986Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:20:02.784955 env[1311]: time="2024-12-13T14:20:02.784829880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:20:02.785141 env[1311]: time="2024-12-13T14:20:02.785088747Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6a774f300c2b6ffd365d56c4066fa64070176f6544f31088e430cfcdadc08df4 pid=3976 runtime=io.containerd.runc.v2 Dec 13 14:20:02.804681 env[1311]: time="2024-12-13T14:20:02.804607126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67dd9c689b-nvrx4,Uid:237a3d9e-0254-4057-bf82-74077775e376,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"ebbb2bf361d79d220cb412bfedae470e8bcec2f90873fe7f7cf03bd47ebfb52d\"" Dec 13 14:20:02.806496 env[1311]: time="2024-12-13T14:20:02.806476785Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 14:20:02.812628 systemd-resolved[1229]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 14:20:02.837569 env[1311]: time="2024-12-13T14:20:02.837516742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-77dkv,Uid:727d322d-22ee-4e52-af18-8f9ebc3141c1,Namespace:kube-system,Attempt:1,} returns sandbox id \"6a774f300c2b6ffd365d56c4066fa64070176f6544f31088e430cfcdadc08df4\"" Dec 13 14:20:02.838655 kubelet[2218]: E1213 14:20:02.838630 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:20:02.843692 env[1311]: time="2024-12-13T14:20:02.843611212Z" level=info msg="CreateContainer within sandbox \"6a774f300c2b6ffd365d56c4066fa64070176f6544f31088e430cfcdadc08df4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 14:20:02.864332 env[1311]: time="2024-12-13T14:20:02.864272927Z" level=info msg="CreateContainer within sandbox \"6a774f300c2b6ffd365d56c4066fa64070176f6544f31088e430cfcdadc08df4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e74a60cda99538672753a8400d3ab124a0e8f13487379e914f9cbc67e4795952\"" Dec 13 14:20:02.864902 env[1311]: time="2024-12-13T14:20:02.864868937Z" level=info msg="StartContainer for \"e74a60cda99538672753a8400d3ab124a0e8f13487379e914f9cbc67e4795952\"" Dec 13 14:20:02.919894 env[1311]: time="2024-12-13T14:20:02.919824647Z" level=info msg="StartContainer for \"e74a60cda99538672753a8400d3ab124a0e8f13487379e914f9cbc67e4795952\" returns successfully" Dec 13 14:20:03.272718 env[1311]: time="2024-12-13T14:20:03.272663519Z" level=info msg="StopPodSandbox for \"8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e\"" Dec 13 14:20:03.355057 env[1311]: 2024-12-13 14:20:03.320 [INFO][4069] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e" Dec 13 14:20:03.355057 env[1311]: 2024-12-13 14:20:03.320 [INFO][4069] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e" iface="eth0" netns="/var/run/netns/cni-3a473844-462b-e039-0f42-48fd6cd01efa" Dec 13 14:20:03.355057 env[1311]: 2024-12-13 14:20:03.320 [INFO][4069] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e" iface="eth0" netns="/var/run/netns/cni-3a473844-462b-e039-0f42-48fd6cd01efa" Dec 13 14:20:03.355057 env[1311]: 2024-12-13 14:20:03.320 [INFO][4069] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e" iface="eth0" netns="/var/run/netns/cni-3a473844-462b-e039-0f42-48fd6cd01efa" Dec 13 14:20:03.355057 env[1311]: 2024-12-13 14:20:03.321 [INFO][4069] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e" Dec 13 14:20:03.355057 env[1311]: 2024-12-13 14:20:03.321 [INFO][4069] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e" Dec 13 14:20:03.355057 env[1311]: 2024-12-13 14:20:03.344 [INFO][4077] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e" HandleID="k8s-pod-network.8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e" Workload="localhost-k8s-calico--apiserver--67dd9c689b--4dc82-eth0" Dec 13 14:20:03.355057 env[1311]: 2024-12-13 14:20:03.344 [INFO][4077] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:20:03.355057 env[1311]: 2024-12-13 14:20:03.344 [INFO][4077] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:20:03.355057 env[1311]: 2024-12-13 14:20:03.350 [WARNING][4077] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e" HandleID="k8s-pod-network.8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e" Workload="localhost-k8s-calico--apiserver--67dd9c689b--4dc82-eth0" Dec 13 14:20:03.355057 env[1311]: 2024-12-13 14:20:03.350 [INFO][4077] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e" HandleID="k8s-pod-network.8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e" Workload="localhost-k8s-calico--apiserver--67dd9c689b--4dc82-eth0" Dec 13 14:20:03.355057 env[1311]: 2024-12-13 14:20:03.351 [INFO][4077] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:20:03.355057 env[1311]: 2024-12-13 14:20:03.353 [INFO][4069] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e" Dec 13 14:20:03.355878 env[1311]: time="2024-12-13T14:20:03.355366416Z" level=info msg="TearDown network for sandbox \"8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e\" successfully" Dec 13 14:20:03.355878 env[1311]: time="2024-12-13T14:20:03.355431829Z" level=info msg="StopPodSandbox for \"8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e\" returns successfully" Dec 13 14:20:03.356231 env[1311]: time="2024-12-13T14:20:03.356197716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67dd9c689b-4dc82,Uid:3cc34ed8-34cd-4a26-9a00-bd15703328eb,Namespace:calico-apiserver,Attempt:1,}" Dec 13 14:20:03.390122 systemd[1]: run-netns-cni\x2d3a473844\x2d462b\x2de039\x2d0f42\x2d48fd6cd01efa.mount: Deactivated successfully. Dec 13 14:20:03.494461 systemd-networkd[1087]: calib9bd5f6f228: Link UP Dec 13 14:20:03.496294 systemd-networkd[1087]: calib9bd5f6f228: Gained carrier Dec 13 14:20:03.497077 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calib9bd5f6f228: link becomes ready Dec 13 14:20:03.509788 env[1311]: 2024-12-13 14:20:03.417 [INFO][4085] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--67dd9c689b--4dc82-eth0 calico-apiserver-67dd9c689b- calico-apiserver 3cc34ed8-34cd-4a26-9a00-bd15703328eb 953 0 2024-12-13 14:19:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67dd9c689b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-67dd9c689b-4dc82 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib9bd5f6f228 [] []}} ContainerID="2f1c69fd6c3022ecab959cd7fcfdc7195f90f0d55961886352f84b89ccbda8bf" Namespace="calico-apiserver" Pod="calico-apiserver-67dd9c689b-4dc82" WorkloadEndpoint="localhost-k8s-calico--apiserver--67dd9c689b--4dc82-" Dec 13 14:20:03.509788 env[1311]: 2024-12-13 14:20:03.417 [INFO][4085] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2f1c69fd6c3022ecab959cd7fcfdc7195f90f0d55961886352f84b89ccbda8bf" Namespace="calico-apiserver" Pod="calico-apiserver-67dd9c689b-4dc82" WorkloadEndpoint="localhost-k8s-calico--apiserver--67dd9c689b--4dc82-eth0" Dec 13 14:20:03.509788 env[1311]: 2024-12-13 14:20:03.448 [INFO][4098] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2f1c69fd6c3022ecab959cd7fcfdc7195f90f0d55961886352f84b89ccbda8bf" HandleID="k8s-pod-network.2f1c69fd6c3022ecab959cd7fcfdc7195f90f0d55961886352f84b89ccbda8bf" Workload="localhost-k8s-calico--apiserver--67dd9c689b--4dc82-eth0" Dec 13 14:20:03.509788 env[1311]: 2024-12-13 14:20:03.458 [INFO][4098] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2f1c69fd6c3022ecab959cd7fcfdc7195f90f0d55961886352f84b89ccbda8bf" HandleID="k8s-pod-network.2f1c69fd6c3022ecab959cd7fcfdc7195f90f0d55961886352f84b89ccbda8bf" Workload="localhost-k8s-calico--apiserver--67dd9c689b--4dc82-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00019ca50), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-67dd9c689b-4dc82", "timestamp":"2024-12-13 14:20:03.448497517 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 14:20:03.509788 env[1311]: 2024-12-13 14:20:03.458 [INFO][4098] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:20:03.509788 env[1311]: 2024-12-13 14:20:03.458 [INFO][4098] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:20:03.509788 env[1311]: 2024-12-13 14:20:03.458 [INFO][4098] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 14:20:03.509788 env[1311]: 2024-12-13 14:20:03.460 [INFO][4098] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2f1c69fd6c3022ecab959cd7fcfdc7195f90f0d55961886352f84b89ccbda8bf" host="localhost" Dec 13 14:20:03.509788 env[1311]: 2024-12-13 14:20:03.465 [INFO][4098] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 14:20:03.509788 env[1311]: 2024-12-13 14:20:03.469 [INFO][4098] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 14:20:03.509788 env[1311]: 2024-12-13 14:20:03.471 [INFO][4098] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 14:20:03.509788 env[1311]: 2024-12-13 14:20:03.474 [INFO][4098] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 14:20:03.509788 env[1311]: 2024-12-13 14:20:03.474 [INFO][4098] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2f1c69fd6c3022ecab959cd7fcfdc7195f90f0d55961886352f84b89ccbda8bf" host="localhost" Dec 13 14:20:03.509788 env[1311]: 2024-12-13 14:20:03.476 [INFO][4098] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2f1c69fd6c3022ecab959cd7fcfdc7195f90f0d55961886352f84b89ccbda8bf Dec 13 14:20:03.509788 env[1311]: 2024-12-13 14:20:03.482 [INFO][4098] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2f1c69fd6c3022ecab959cd7fcfdc7195f90f0d55961886352f84b89ccbda8bf" host="localhost" Dec 13 14:20:03.509788 env[1311]: 2024-12-13 14:20:03.489 [INFO][4098] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.2f1c69fd6c3022ecab959cd7fcfdc7195f90f0d55961886352f84b89ccbda8bf" host="localhost" Dec 13 14:20:03.509788 env[1311]: 2024-12-13 14:20:03.489 [INFO][4098] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.2f1c69fd6c3022ecab959cd7fcfdc7195f90f0d55961886352f84b89ccbda8bf" host="localhost" Dec 13 14:20:03.509788 env[1311]: 2024-12-13 14:20:03.490 [INFO][4098] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
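In the coredns WorkloadEndpoint dumps a few entries above, the container ports are printed in Go hex notation (Port:0x35 for dns and dns-tcp, Port:0x23c1 for metrics). Converted, these are the usual CoreDNS ports; the snippet below is purely illustrative:

    # Illustrative only: the hex port values from the coredns WorkloadEndpoint dump.
    ports = {"dns": 0x35, "dns-tcp": 0x35, "metrics": 0x23c1}
    print({name: int(port) for name, port in ports.items()})
    # {'dns': 53, 'dns-tcp': 53, 'metrics': 9153}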
Dec 13 14:20:03.509788 env[1311]: 2024-12-13 14:20:03.490 [INFO][4098] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="2f1c69fd6c3022ecab959cd7fcfdc7195f90f0d55961886352f84b89ccbda8bf" HandleID="k8s-pod-network.2f1c69fd6c3022ecab959cd7fcfdc7195f90f0d55961886352f84b89ccbda8bf" Workload="localhost-k8s-calico--apiserver--67dd9c689b--4dc82-eth0" Dec 13 14:20:03.510569 env[1311]: 2024-12-13 14:20:03.492 [INFO][4085] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2f1c69fd6c3022ecab959cd7fcfdc7195f90f0d55961886352f84b89ccbda8bf" Namespace="calico-apiserver" Pod="calico-apiserver-67dd9c689b-4dc82" WorkloadEndpoint="localhost-k8s-calico--apiserver--67dd9c689b--4dc82-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67dd9c689b--4dc82-eth0", GenerateName:"calico-apiserver-67dd9c689b-", Namespace:"calico-apiserver", SelfLink:"", UID:"3cc34ed8-34cd-4a26-9a00-bd15703328eb", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 19, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67dd9c689b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-67dd9c689b-4dc82", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib9bd5f6f228", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:20:03.510569 env[1311]: 2024-12-13 14:20:03.492 [INFO][4085] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="2f1c69fd6c3022ecab959cd7fcfdc7195f90f0d55961886352f84b89ccbda8bf" Namespace="calico-apiserver" Pod="calico-apiserver-67dd9c689b-4dc82" WorkloadEndpoint="localhost-k8s-calico--apiserver--67dd9c689b--4dc82-eth0" Dec 13 14:20:03.510569 env[1311]: 2024-12-13 14:20:03.492 [INFO][4085] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib9bd5f6f228 ContainerID="2f1c69fd6c3022ecab959cd7fcfdc7195f90f0d55961886352f84b89ccbda8bf" Namespace="calico-apiserver" Pod="calico-apiserver-67dd9c689b-4dc82" WorkloadEndpoint="localhost-k8s-calico--apiserver--67dd9c689b--4dc82-eth0" Dec 13 14:20:03.510569 env[1311]: 2024-12-13 14:20:03.496 [INFO][4085] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2f1c69fd6c3022ecab959cd7fcfdc7195f90f0d55961886352f84b89ccbda8bf" Namespace="calico-apiserver" Pod="calico-apiserver-67dd9c689b-4dc82" WorkloadEndpoint="localhost-k8s-calico--apiserver--67dd9c689b--4dc82-eth0" Dec 13 14:20:03.510569 env[1311]: 2024-12-13 14:20:03.496 [INFO][4085] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2f1c69fd6c3022ecab959cd7fcfdc7195f90f0d55961886352f84b89ccbda8bf" Namespace="calico-apiserver" 
Pod="calico-apiserver-67dd9c689b-4dc82" WorkloadEndpoint="localhost-k8s-calico--apiserver--67dd9c689b--4dc82-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67dd9c689b--4dc82-eth0", GenerateName:"calico-apiserver-67dd9c689b-", Namespace:"calico-apiserver", SelfLink:"", UID:"3cc34ed8-34cd-4a26-9a00-bd15703328eb", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 19, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67dd9c689b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2f1c69fd6c3022ecab959cd7fcfdc7195f90f0d55961886352f84b89ccbda8bf", Pod:"calico-apiserver-67dd9c689b-4dc82", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib9bd5f6f228", MAC:"52:9c:a1:a9:f3:ec", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:20:03.510569 env[1311]: 2024-12-13 14:20:03.508 [INFO][4085] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2f1c69fd6c3022ecab959cd7fcfdc7195f90f0d55961886352f84b89ccbda8bf" Namespace="calico-apiserver" Pod="calico-apiserver-67dd9c689b-4dc82" WorkloadEndpoint="localhost-k8s-calico--apiserver--67dd9c689b--4dc82-eth0" Dec 13 14:20:03.522000 audit[4119]: NETFILTER_CFG table=filter:103 family=2 entries=44 op=nft_register_chain pid=4119 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:20:03.522000 audit[4119]: SYSCALL arch=c000003e syscall=46 success=yes exit=24368 a0=3 a1=7ffde16664a0 a2=0 a3=7ffde166648c items=0 ppid=3577 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:03.522000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:20:03.526540 env[1311]: time="2024-12-13T14:20:03.526446440Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:20:03.526540 env[1311]: time="2024-12-13T14:20:03.526489652Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:20:03.526540 env[1311]: time="2024-12-13T14:20:03.526500472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:20:03.526851 env[1311]: time="2024-12-13T14:20:03.526711097Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2f1c69fd6c3022ecab959cd7fcfdc7195f90f0d55961886352f84b89ccbda8bf pid=4128 runtime=io.containerd.runc.v2 Dec 13 14:20:03.544888 systemd[1]: run-containerd-runc-k8s.io-2f1c69fd6c3022ecab959cd7fcfdc7195f90f0d55961886352f84b89ccbda8bf-runc.BToRPX.mount: Deactivated successfully. Dec 13 14:20:03.556624 systemd-resolved[1229]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 14:20:03.581131 env[1311]: time="2024-12-13T14:20:03.581071925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67dd9c689b-4dc82,Uid:3cc34ed8-34cd-4a26-9a00-bd15703328eb,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"2f1c69fd6c3022ecab959cd7fcfdc7195f90f0d55961886352f84b89ccbda8bf\"" Dec 13 14:20:03.762414 kubelet[2218]: E1213 14:20:03.762379 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:20:03.772257 kubelet[2218]: I1213 14:20:03.772059 2218 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-77dkv" podStartSLOduration=37.771996618 podStartE2EDuration="37.771996618s" podCreationTimestamp="2024-12-13 14:19:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:20:03.771378949 +0000 UTC m=+50.594436882" watchObservedRunningTime="2024-12-13 14:20:03.771996618 +0000 UTC m=+50.595054521" Dec 13 14:20:03.798000 audit[4163]: NETFILTER_CFG table=filter:104 family=2 entries=16 op=nft_register_rule pid=4163 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:20:03.798000 audit[4163]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffcd4c79f20 a2=0 a3=7ffcd4c79f0c items=0 ppid=2379 pid=4163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:03.798000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:20:03.805000 audit[4163]: NETFILTER_CFG table=nat:105 family=2 entries=14 op=nft_register_rule pid=4163 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:20:03.805000 audit[4163]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffcd4c79f20 a2=0 a3=0 items=0 ppid=2379 pid=4163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:03.805000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:20:03.818000 audit[4165]: NETFILTER_CFG table=filter:106 family=2 entries=13 op=nft_register_rule pid=4165 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:20:03.818000 audit[4165]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffe71dafa20 a2=0 a3=7ffe71dafa0c items=0 ppid=2379 pid=4165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:03.818000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:20:03.827000 audit[4165]: NETFILTER_CFG table=nat:107 family=2 entries=35 op=nft_register_chain pid=4165 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:20:03.827000 audit[4165]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffe71dafa20 a2=0 a3=7ffe71dafa0c items=0 ppid=2379 pid=4165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:03.827000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:20:03.927224 systemd-networkd[1087]: calib107a93b52c: Gained IPv6LL Dec 13 14:20:04.183310 systemd-networkd[1087]: cali0f0c75667ae: Gained IPv6LL Dec 13 14:20:04.273039 env[1311]: time="2024-12-13T14:20:04.272942685Z" level=info msg="StopPodSandbox for \"1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9\"" Dec 13 14:20:04.352280 env[1311]: 2024-12-13 14:20:04.316 [INFO][4183] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9" Dec 13 14:20:04.352280 env[1311]: 2024-12-13 14:20:04.316 [INFO][4183] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9" iface="eth0" netns="/var/run/netns/cni-ecc15736-8cb2-e558-3e30-7379264e1854" Dec 13 14:20:04.352280 env[1311]: 2024-12-13 14:20:04.316 [INFO][4183] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9" iface="eth0" netns="/var/run/netns/cni-ecc15736-8cb2-e558-3e30-7379264e1854" Dec 13 14:20:04.352280 env[1311]: 2024-12-13 14:20:04.316 [INFO][4183] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9" iface="eth0" netns="/var/run/netns/cni-ecc15736-8cb2-e558-3e30-7379264e1854" Dec 13 14:20:04.352280 env[1311]: 2024-12-13 14:20:04.316 [INFO][4183] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9" Dec 13 14:20:04.352280 env[1311]: 2024-12-13 14:20:04.316 [INFO][4183] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9" Dec 13 14:20:04.352280 env[1311]: 2024-12-13 14:20:04.340 [INFO][4190] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9" HandleID="k8s-pod-network.1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9" Workload="localhost-k8s-calico--kube--controllers--77d57bc794--t48v6-eth0" Dec 13 14:20:04.352280 env[1311]: 2024-12-13 14:20:04.340 [INFO][4190] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:20:04.352280 env[1311]: 2024-12-13 14:20:04.341 [INFO][4190] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
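The audit PROCTITLE records in this stretch of the log carry the restored command lines as hex-encoded, NUL-separated argv strings. Decoding them is mechanical; a small sketch (the helper name is ours, not part of auditd or iptables):

    # Hypothetical helper: decode the hex, NUL-separated argv that auditd
    # prints in PROCTITLE records.
    def decode_proctitle(hexstr: str) -> list[str]:
        return bytes.fromhex(hexstr).decode().split("\x00")

    # The two distinct proctitle values that appear in the audit records above:
    print(decode_proctitle(
        "69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365"
        "002D2D77616974003130002D2D776169742D696E74657276616C003530303030"))
    # ['iptables-nft-restore', '--noflush', '--verbose', '--wait', '10', '--wait-interval', '50000']

    print(decode_proctitle(
        "69707461626C65732D726573746F7265002D770035002D5700313030303030"
        "002D2D6E6F666C757368002D2D636F756E74657273"))
    # ['iptables-restore', '-w', '5', '-W', '100000', '--noflush', '--counters']

The first command line corresponds to the NETFILTER_CFG records with comm="iptables-nft-re" (ppid 3577), the second to those with comm="iptables-restor" (ppid 2379).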
Dec 13 14:20:04.352280 env[1311]: 2024-12-13 14:20:04.347 [WARNING][4190] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9" HandleID="k8s-pod-network.1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9" Workload="localhost-k8s-calico--kube--controllers--77d57bc794--t48v6-eth0" Dec 13 14:20:04.352280 env[1311]: 2024-12-13 14:20:04.347 [INFO][4190] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9" HandleID="k8s-pod-network.1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9" Workload="localhost-k8s-calico--kube--controllers--77d57bc794--t48v6-eth0" Dec 13 14:20:04.352280 env[1311]: 2024-12-13 14:20:04.348 [INFO][4190] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:20:04.352280 env[1311]: 2024-12-13 14:20:04.350 [INFO][4183] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9" Dec 13 14:20:04.352789 env[1311]: time="2024-12-13T14:20:04.352469482Z" level=info msg="TearDown network for sandbox \"1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9\" successfully" Dec 13 14:20:04.352789 env[1311]: time="2024-12-13T14:20:04.352519586Z" level=info msg="StopPodSandbox for \"1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9\" returns successfully" Dec 13 14:20:04.353365 env[1311]: time="2024-12-13T14:20:04.353311423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77d57bc794-t48v6,Uid:1706fd1e-e014-4076-a36c-7345b2980a9c,Namespace:calico-system,Attempt:1,}" Dec 13 14:20:04.355338 systemd[1]: run-netns-cni\x2decc15736\x2d8cb2\x2de558\x2d3e30\x2d7379264e1854.mount: Deactivated successfully. 
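The recurring kubelet error above, "Nameserver limits exceeded ... the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8", indicates that the node's resolv.conf lists more nameservers than kubelet will pass through (three, matching the classic resolver limit), so the extras are dropped. A minimal sketch of that trimming, under that assumption (this is not kubelet code, and the omitted entries are not recorded in the log):

    # Sketch only, not kubelet code. Assumes the usual limit of three nameservers.
    MAX_NAMESERVERS = 3

    def apply_nameserver_limit(nameservers):
        """Return (applied, omitted) nameserver lists."""
        return nameservers[:MAX_NAMESERVERS], nameservers[MAX_NAMESERVERS:]

    # 192.0.2.1 is a hypothetical fourth entry; the real omitted ones are not logged.
    applied, omitted = apply_nameserver_limit(["1.1.1.1", "1.0.0.1", "8.8.8.8", "192.0.2.1"])
    print("applied:", " ".join(applied))   # applied: 1.1.1.1 1.0.0.1 8.8.8.8
    print("omitted:", " ".join(omitted))   # omitted: 192.0.2.1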
Dec 13 14:20:04.474095 systemd-networkd[1087]: calia4436d45805: Link UP Dec 13 14:20:04.477308 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:20:04.477355 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calia4436d45805: link becomes ready Dec 13 14:20:04.476994 systemd-networkd[1087]: calia4436d45805: Gained carrier Dec 13 14:20:04.487754 env[1311]: 2024-12-13 14:20:04.399 [INFO][4198] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--77d57bc794--t48v6-eth0 calico-kube-controllers-77d57bc794- calico-system 1706fd1e-e014-4076-a36c-7345b2980a9c 969 0 2024-12-13 14:19:34 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:77d57bc794 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-77d57bc794-t48v6 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia4436d45805 [] []}} ContainerID="53bec54d8043be6637ad3d284eb0d8bc0afd9fd69c3294bda54a43347cbb8011" Namespace="calico-system" Pod="calico-kube-controllers-77d57bc794-t48v6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77d57bc794--t48v6-" Dec 13 14:20:04.487754 env[1311]: 2024-12-13 14:20:04.399 [INFO][4198] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="53bec54d8043be6637ad3d284eb0d8bc0afd9fd69c3294bda54a43347cbb8011" Namespace="calico-system" Pod="calico-kube-controllers-77d57bc794-t48v6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77d57bc794--t48v6-eth0" Dec 13 14:20:04.487754 env[1311]: 2024-12-13 14:20:04.428 [INFO][4210] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="53bec54d8043be6637ad3d284eb0d8bc0afd9fd69c3294bda54a43347cbb8011" HandleID="k8s-pod-network.53bec54d8043be6637ad3d284eb0d8bc0afd9fd69c3294bda54a43347cbb8011" Workload="localhost-k8s-calico--kube--controllers--77d57bc794--t48v6-eth0" Dec 13 14:20:04.487754 env[1311]: 2024-12-13 14:20:04.438 [INFO][4210] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="53bec54d8043be6637ad3d284eb0d8bc0afd9fd69c3294bda54a43347cbb8011" HandleID="k8s-pod-network.53bec54d8043be6637ad3d284eb0d8bc0afd9fd69c3294bda54a43347cbb8011" Workload="localhost-k8s-calico--kube--controllers--77d57bc794--t48v6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000425bb0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-77d57bc794-t48v6", "timestamp":"2024-12-13 14:20:04.428888255 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 14:20:04.487754 env[1311]: 2024-12-13 14:20:04.438 [INFO][4210] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:20:04.487754 env[1311]: 2024-12-13 14:20:04.439 [INFO][4210] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 14:20:04.487754 env[1311]: 2024-12-13 14:20:04.439 [INFO][4210] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 14:20:04.487754 env[1311]: 2024-12-13 14:20:04.441 [INFO][4210] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.53bec54d8043be6637ad3d284eb0d8bc0afd9fd69c3294bda54a43347cbb8011" host="localhost" Dec 13 14:20:04.487754 env[1311]: 2024-12-13 14:20:04.444 [INFO][4210] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 14:20:04.487754 env[1311]: 2024-12-13 14:20:04.448 [INFO][4210] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 14:20:04.487754 env[1311]: 2024-12-13 14:20:04.450 [INFO][4210] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 14:20:04.487754 env[1311]: 2024-12-13 14:20:04.452 [INFO][4210] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 14:20:04.487754 env[1311]: 2024-12-13 14:20:04.453 [INFO][4210] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.53bec54d8043be6637ad3d284eb0d8bc0afd9fd69c3294bda54a43347cbb8011" host="localhost" Dec 13 14:20:04.487754 env[1311]: 2024-12-13 14:20:04.454 [INFO][4210] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.53bec54d8043be6637ad3d284eb0d8bc0afd9fd69c3294bda54a43347cbb8011 Dec 13 14:20:04.487754 env[1311]: 2024-12-13 14:20:04.461 [INFO][4210] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.53bec54d8043be6637ad3d284eb0d8bc0afd9fd69c3294bda54a43347cbb8011" host="localhost" Dec 13 14:20:04.487754 env[1311]: 2024-12-13 14:20:04.469 [INFO][4210] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.53bec54d8043be6637ad3d284eb0d8bc0afd9fd69c3294bda54a43347cbb8011" host="localhost" Dec 13 14:20:04.487754 env[1311]: 2024-12-13 14:20:04.469 [INFO][4210] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.53bec54d8043be6637ad3d284eb0d8bc0afd9fd69c3294bda54a43347cbb8011" host="localhost" Dec 13 14:20:04.487754 env[1311]: 2024-12-13 14:20:04.469 [INFO][4210] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
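[Editor's sketch, not part of the log] The sequence above is Calico's per-host IPAM in action: the host "localhost" holds an affinity for block 192.168.88.128/26, the block is loaded, and the next free address (192.168.88.132) is claimed for the calico-kube-controllers pod; the later sandboxes in this log get .133 and .134 the same way. The sketch below reproduces only the "pick the first free address in the affine block" step. The in-memory map, the seeded addresses .128–.131 (which do not appear in this excerpt), and the helper name nextFree are my assumptions; the real allocator persists allocations in the Calico datastore under the handle IDs shown above.

package main

import (
	"fmt"
	"net/netip"
)

// nextFree walks the affinity block and returns the first address that is
// not already recorded as taken. Purely illustrative; Calico's allocator
// tracks this state in its datastore, not in a local map.
func nextFree(block netip.Prefix, taken map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !taken[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26") // affinity block from the log
	// Seeded so the sketch reproduces the result seen above; whether Calico
	// treats .128 itself as allocatable is not shown in this excerpt.
	taken := map[netip.Addr]bool{
		netip.MustParseAddr("192.168.88.128"): true,
		netip.MustParseAddr("192.168.88.129"): true,
		netip.MustParseAddr("192.168.88.130"): true,
		netip.MustParseAddr("192.168.88.131"): true,
	}
	if a, ok := nextFree(block, taken); ok {
		fmt.Println("next address:", a) // 192.168.88.132, matching the log
	}
}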
Dec 13 14:20:04.487754 env[1311]: 2024-12-13 14:20:04.469 [INFO][4210] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="53bec54d8043be6637ad3d284eb0d8bc0afd9fd69c3294bda54a43347cbb8011" HandleID="k8s-pod-network.53bec54d8043be6637ad3d284eb0d8bc0afd9fd69c3294bda54a43347cbb8011" Workload="localhost-k8s-calico--kube--controllers--77d57bc794--t48v6-eth0" Dec 13 14:20:04.488421 env[1311]: 2024-12-13 14:20:04.471 [INFO][4198] cni-plugin/k8s.go 386: Populated endpoint ContainerID="53bec54d8043be6637ad3d284eb0d8bc0afd9fd69c3294bda54a43347cbb8011" Namespace="calico-system" Pod="calico-kube-controllers-77d57bc794-t48v6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77d57bc794--t48v6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--77d57bc794--t48v6-eth0", GenerateName:"calico-kube-controllers-77d57bc794-", Namespace:"calico-system", SelfLink:"", UID:"1706fd1e-e014-4076-a36c-7345b2980a9c", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 19, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77d57bc794", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-77d57bc794-t48v6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia4436d45805", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:20:04.488421 env[1311]: 2024-12-13 14:20:04.472 [INFO][4198] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="53bec54d8043be6637ad3d284eb0d8bc0afd9fd69c3294bda54a43347cbb8011" Namespace="calico-system" Pod="calico-kube-controllers-77d57bc794-t48v6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77d57bc794--t48v6-eth0" Dec 13 14:20:04.488421 env[1311]: 2024-12-13 14:20:04.472 [INFO][4198] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia4436d45805 ContainerID="53bec54d8043be6637ad3d284eb0d8bc0afd9fd69c3294bda54a43347cbb8011" Namespace="calico-system" Pod="calico-kube-controllers-77d57bc794-t48v6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77d57bc794--t48v6-eth0" Dec 13 14:20:04.488421 env[1311]: 2024-12-13 14:20:04.476 [INFO][4198] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="53bec54d8043be6637ad3d284eb0d8bc0afd9fd69c3294bda54a43347cbb8011" Namespace="calico-system" Pod="calico-kube-controllers-77d57bc794-t48v6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77d57bc794--t48v6-eth0" Dec 13 14:20:04.488421 env[1311]: 2024-12-13 14:20:04.477 [INFO][4198] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="53bec54d8043be6637ad3d284eb0d8bc0afd9fd69c3294bda54a43347cbb8011" Namespace="calico-system" Pod="calico-kube-controllers-77d57bc794-t48v6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77d57bc794--t48v6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--77d57bc794--t48v6-eth0", GenerateName:"calico-kube-controllers-77d57bc794-", Namespace:"calico-system", SelfLink:"", UID:"1706fd1e-e014-4076-a36c-7345b2980a9c", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 19, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77d57bc794", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"53bec54d8043be6637ad3d284eb0d8bc0afd9fd69c3294bda54a43347cbb8011", Pod:"calico-kube-controllers-77d57bc794-t48v6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia4436d45805", MAC:"fa:be:e9:c3:4c:f3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:20:04.488421 env[1311]: 2024-12-13 14:20:04.486 [INFO][4198] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="53bec54d8043be6637ad3d284eb0d8bc0afd9fd69c3294bda54a43347cbb8011" Namespace="calico-system" Pod="calico-kube-controllers-77d57bc794-t48v6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--77d57bc794--t48v6-eth0" Dec 13 14:20:04.497000 audit[4231]: NETFILTER_CFG table=filter:108 family=2 entries=42 op=nft_register_chain pid=4231 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:20:04.497000 audit[4231]: SYSCALL arch=c000003e syscall=46 success=yes exit=21508 a0=3 a1=7ffdf84d4ae0 a2=0 a3=7ffdf84d4acc items=0 ppid=3577 pid=4231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:04.497000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:20:04.501868 env[1311]: time="2024-12-13T14:20:04.501777413Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:20:04.501868 env[1311]: time="2024-12-13T14:20:04.501839992Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:20:04.501868 env[1311]: time="2024-12-13T14:20:04.501855631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:20:04.502144 env[1311]: time="2024-12-13T14:20:04.502089169Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/53bec54d8043be6637ad3d284eb0d8bc0afd9fd69c3294bda54a43347cbb8011 pid=4239 runtime=io.containerd.runc.v2 Dec 13 14:20:04.521851 systemd[1]: run-containerd-runc-k8s.io-53bec54d8043be6637ad3d284eb0d8bc0afd9fd69c3294bda54a43347cbb8011-runc.aesTa4.mount: Deactivated successfully. Dec 13 14:20:04.535212 systemd-resolved[1229]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 14:20:04.567571 env[1311]: time="2024-12-13T14:20:04.567516812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-77d57bc794-t48v6,Uid:1706fd1e-e014-4076-a36c-7345b2980a9c,Namespace:calico-system,Attempt:1,} returns sandbox id \"53bec54d8043be6637ad3d284eb0d8bc0afd9fd69c3294bda54a43347cbb8011\"" Dec 13 14:20:04.767739 kubelet[2218]: E1213 14:20:04.767699 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:20:05.272323 env[1311]: time="2024-12-13T14:20:05.272268960Z" level=info msg="StopPodSandbox for \"c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a\"" Dec 13 14:20:05.337504 systemd-networkd[1087]: calib9bd5f6f228: Gained IPv6LL Dec 13 14:20:05.356032 env[1311]: 2024-12-13 14:20:05.320 [INFO][4294] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a" Dec 13 14:20:05.356032 env[1311]: 2024-12-13 14:20:05.320 [INFO][4294] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a" iface="eth0" netns="/var/run/netns/cni-37cd95d5-fd1a-6085-be6c-eb7f8b90189d" Dec 13 14:20:05.356032 env[1311]: 2024-12-13 14:20:05.320 [INFO][4294] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a" iface="eth0" netns="/var/run/netns/cni-37cd95d5-fd1a-6085-be6c-eb7f8b90189d" Dec 13 14:20:05.356032 env[1311]: 2024-12-13 14:20:05.320 [INFO][4294] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a" iface="eth0" netns="/var/run/netns/cni-37cd95d5-fd1a-6085-be6c-eb7f8b90189d" Dec 13 14:20:05.356032 env[1311]: 2024-12-13 14:20:05.320 [INFO][4294] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a" Dec 13 14:20:05.356032 env[1311]: 2024-12-13 14:20:05.320 [INFO][4294] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a" Dec 13 14:20:05.356032 env[1311]: 2024-12-13 14:20:05.343 [INFO][4301] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a" HandleID="k8s-pod-network.c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a" Workload="localhost-k8s-coredns--76f75df574--255rl-eth0" Dec 13 14:20:05.356032 env[1311]: 2024-12-13 14:20:05.344 [INFO][4301] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Dec 13 14:20:05.356032 env[1311]: 2024-12-13 14:20:05.344 [INFO][4301] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:20:05.356032 env[1311]: 2024-12-13 14:20:05.350 [WARNING][4301] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a" HandleID="k8s-pod-network.c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a" Workload="localhost-k8s-coredns--76f75df574--255rl-eth0" Dec 13 14:20:05.356032 env[1311]: 2024-12-13 14:20:05.350 [INFO][4301] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a" HandleID="k8s-pod-network.c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a" Workload="localhost-k8s-coredns--76f75df574--255rl-eth0" Dec 13 14:20:05.356032 env[1311]: 2024-12-13 14:20:05.352 [INFO][4301] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:20:05.356032 env[1311]: 2024-12-13 14:20:05.354 [INFO][4294] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a" Dec 13 14:20:05.356911 env[1311]: time="2024-12-13T14:20:05.356311564Z" level=info msg="TearDown network for sandbox \"c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a\" successfully" Dec 13 14:20:05.356911 env[1311]: time="2024-12-13T14:20:05.356360837Z" level=info msg="StopPodSandbox for \"c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a\" returns successfully" Dec 13 14:20:05.356989 kubelet[2218]: E1213 14:20:05.356740 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:20:05.357524 env[1311]: time="2024-12-13T14:20:05.357489706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-255rl,Uid:0a2e947b-99c4-4ed5-a3f2-f248230d88f2,Namespace:kube-system,Attempt:1,}" Dec 13 14:20:05.388586 systemd[1]: run-netns-cni\x2d37cd95d5\x2dfd1a\x2d6085\x2dbe6c\x2deb7f8b90189d.mount: Deactivated successfully. 
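[Editor's sketch, not part of the log] The recurring kubelet dns.go:153 warnings above mean the pod's effective resolv.conf would carry more nameservers than kubelet allows (three, mirroring the classic glibc MAXNS limit), so only "1.1.1.1 1.0.0.1 8.8.8.8" is applied and the rest are dropped; this suggests the node has more than three resolvers configured. The sketch below reproduces just that trimming. The helper name effectiveNameservers and the simplified parsing (no search/options handling) are my own.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// Kubelet keeps at most three nameservers per pod; extra entries are omitted,
// which is exactly what the dns.go:153 warning reports.
const maxNameservers = 3

// effectiveNameservers parses resolv.conf-style input and splits the
// nameserver entries into the applied set and the omitted remainder.
func effectiveNameservers(path string) (kept, omitted []string, err error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, nil, err
	}
	defer f.Close()

	var all []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			all = append(all, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		return nil, nil, err
	}
	if len(all) <= maxNameservers {
		return all, nil, nil
	}
	return all[:maxNameservers], all[maxNameservers:], nil
}

func main() {
	kept, omitted, err := effectiveNameservers("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("applied:", strings.Join(kept, " "))
	if len(omitted) > 0 {
		fmt.Println("omitted:", strings.Join(omitted, " "))
	}
}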
Dec 13 14:20:05.489863 systemd-networkd[1087]: cali976d2d0123a: Link UP Dec 13 14:20:05.492740 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:20:05.492813 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali976d2d0123a: link becomes ready Dec 13 14:20:05.492978 systemd-networkd[1087]: cali976d2d0123a: Gained carrier Dec 13 14:20:05.506426 env[1311]: 2024-12-13 14:20:05.422 [INFO][4310] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--255rl-eth0 coredns-76f75df574- kube-system 0a2e947b-99c4-4ed5-a3f2-f248230d88f2 979 0 2024-12-13 14:19:26 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-255rl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali976d2d0123a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="07ec8daea893216dd885eedff13345b85ba4bf1baa012bc2140207312fd00752" Namespace="kube-system" Pod="coredns-76f75df574-255rl" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--255rl-" Dec 13 14:20:05.506426 env[1311]: 2024-12-13 14:20:05.422 [INFO][4310] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="07ec8daea893216dd885eedff13345b85ba4bf1baa012bc2140207312fd00752" Namespace="kube-system" Pod="coredns-76f75df574-255rl" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--255rl-eth0" Dec 13 14:20:05.506426 env[1311]: 2024-12-13 14:20:05.448 [INFO][4321] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="07ec8daea893216dd885eedff13345b85ba4bf1baa012bc2140207312fd00752" HandleID="k8s-pod-network.07ec8daea893216dd885eedff13345b85ba4bf1baa012bc2140207312fd00752" Workload="localhost-k8s-coredns--76f75df574--255rl-eth0" Dec 13 14:20:05.506426 env[1311]: 2024-12-13 14:20:05.458 [INFO][4321] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="07ec8daea893216dd885eedff13345b85ba4bf1baa012bc2140207312fd00752" HandleID="k8s-pod-network.07ec8daea893216dd885eedff13345b85ba4bf1baa012bc2140207312fd00752" Workload="localhost-k8s-coredns--76f75df574--255rl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003057c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-255rl", "timestamp":"2024-12-13 14:20:05.448318727 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 14:20:05.506426 env[1311]: 2024-12-13 14:20:05.458 [INFO][4321] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:20:05.506426 env[1311]: 2024-12-13 14:20:05.459 [INFO][4321] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 14:20:05.506426 env[1311]: 2024-12-13 14:20:05.459 [INFO][4321] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 14:20:05.506426 env[1311]: 2024-12-13 14:20:05.461 [INFO][4321] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.07ec8daea893216dd885eedff13345b85ba4bf1baa012bc2140207312fd00752" host="localhost" Dec 13 14:20:05.506426 env[1311]: 2024-12-13 14:20:05.465 [INFO][4321] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 14:20:05.506426 env[1311]: 2024-12-13 14:20:05.469 [INFO][4321] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 14:20:05.506426 env[1311]: 2024-12-13 14:20:05.470 [INFO][4321] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 14:20:05.506426 env[1311]: 2024-12-13 14:20:05.472 [INFO][4321] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 14:20:05.506426 env[1311]: 2024-12-13 14:20:05.472 [INFO][4321] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.07ec8daea893216dd885eedff13345b85ba4bf1baa012bc2140207312fd00752" host="localhost" Dec 13 14:20:05.506426 env[1311]: 2024-12-13 14:20:05.474 [INFO][4321] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.07ec8daea893216dd885eedff13345b85ba4bf1baa012bc2140207312fd00752 Dec 13 14:20:05.506426 env[1311]: 2024-12-13 14:20:05.478 [INFO][4321] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.07ec8daea893216dd885eedff13345b85ba4bf1baa012bc2140207312fd00752" host="localhost" Dec 13 14:20:05.506426 env[1311]: 2024-12-13 14:20:05.485 [INFO][4321] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.07ec8daea893216dd885eedff13345b85ba4bf1baa012bc2140207312fd00752" host="localhost" Dec 13 14:20:05.506426 env[1311]: 2024-12-13 14:20:05.485 [INFO][4321] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.07ec8daea893216dd885eedff13345b85ba4bf1baa012bc2140207312fd00752" host="localhost" Dec 13 14:20:05.506426 env[1311]: 2024-12-13 14:20:05.486 [INFO][4321] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 14:20:05.506426 env[1311]: 2024-12-13 14:20:05.486 [INFO][4321] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="07ec8daea893216dd885eedff13345b85ba4bf1baa012bc2140207312fd00752" HandleID="k8s-pod-network.07ec8daea893216dd885eedff13345b85ba4bf1baa012bc2140207312fd00752" Workload="localhost-k8s-coredns--76f75df574--255rl-eth0" Dec 13 14:20:05.507442 env[1311]: 2024-12-13 14:20:05.487 [INFO][4310] cni-plugin/k8s.go 386: Populated endpoint ContainerID="07ec8daea893216dd885eedff13345b85ba4bf1baa012bc2140207312fd00752" Namespace="kube-system" Pod="coredns-76f75df574-255rl" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--255rl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--255rl-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"0a2e947b-99c4-4ed5-a3f2-f248230d88f2", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 19, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-255rl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali976d2d0123a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:20:05.507442 env[1311]: 2024-12-13 14:20:05.488 [INFO][4310] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="07ec8daea893216dd885eedff13345b85ba4bf1baa012bc2140207312fd00752" Namespace="kube-system" Pod="coredns-76f75df574-255rl" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--255rl-eth0" Dec 13 14:20:05.507442 env[1311]: 2024-12-13 14:20:05.488 [INFO][4310] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali976d2d0123a ContainerID="07ec8daea893216dd885eedff13345b85ba4bf1baa012bc2140207312fd00752" Namespace="kube-system" Pod="coredns-76f75df574-255rl" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--255rl-eth0" Dec 13 14:20:05.507442 env[1311]: 2024-12-13 14:20:05.493 [INFO][4310] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="07ec8daea893216dd885eedff13345b85ba4bf1baa012bc2140207312fd00752" Namespace="kube-system" Pod="coredns-76f75df574-255rl" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--255rl-eth0" Dec 13 14:20:05.507442 env[1311]: 2024-12-13 14:20:05.493 [INFO][4310] cni-plugin/k8s.go 414: Added 
Mac, interface name, and active container ID to endpoint ContainerID="07ec8daea893216dd885eedff13345b85ba4bf1baa012bc2140207312fd00752" Namespace="kube-system" Pod="coredns-76f75df574-255rl" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--255rl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--255rl-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"0a2e947b-99c4-4ed5-a3f2-f248230d88f2", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 19, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"07ec8daea893216dd885eedff13345b85ba4bf1baa012bc2140207312fd00752", Pod:"coredns-76f75df574-255rl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali976d2d0123a", MAC:"66:56:01:88:b1:1a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:20:05.507442 env[1311]: 2024-12-13 14:20:05.504 [INFO][4310] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="07ec8daea893216dd885eedff13345b85ba4bf1baa012bc2140207312fd00752" Namespace="kube-system" Pod="coredns-76f75df574-255rl" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--255rl-eth0" Dec 13 14:20:05.517000 audit[4345]: NETFILTER_CFG table=filter:109 family=2 entries=38 op=nft_register_chain pid=4345 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:20:05.519135 env[1311]: time="2024-12-13T14:20:05.519064465Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:20:05.519309 env[1311]: time="2024-12-13T14:20:05.519278327Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:20:05.519458 env[1311]: time="2024-12-13T14:20:05.519427747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:20:05.519648 kernel: kauditd_printk_skb: 568 callbacks suppressed Dec 13 14:20:05.519702 kernel: audit: type=1325 audit(1734099605.517:454): table=filter:109 family=2 entries=38 op=nft_register_chain pid=4345 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:20:05.520207 env[1311]: time="2024-12-13T14:20:05.520155563Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/07ec8daea893216dd885eedff13345b85ba4bf1baa012bc2140207312fd00752 pid=4349 runtime=io.containerd.runc.v2 Dec 13 14:20:05.517000 audit[4345]: SYSCALL arch=c000003e syscall=46 success=yes exit=19392 a0=3 a1=7ffe5962bf70 a2=0 a3=7ffe5962bf5c items=0 ppid=3577 pid=4345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:05.533051 kernel: audit: type=1300 audit(1734099605.517:454): arch=c000003e syscall=46 success=yes exit=19392 a0=3 a1=7ffe5962bf70 a2=0 a3=7ffe5962bf5c items=0 ppid=3577 pid=4345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:05.517000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:20:05.537039 kernel: audit: type=1327 audit(1734099605.517:454): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:20:05.545913 systemd[1]: run-containerd-runc-k8s.io-07ec8daea893216dd885eedff13345b85ba4bf1baa012bc2140207312fd00752-runc.wSY3lW.mount: Deactivated successfully. 
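[Editor's sketch, not part of the log] The audit PROCTITLE records accompanying the NETFILTER_CFG events above hex-encode the process command line, with NUL bytes separating the arguments (the hex form is used because of those embedded NULs). The sketch below decodes one of them; the helper name decodeProctitle is mine, but the hex value is copied verbatim from the record above and decodes to "iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000".

package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// decodeProctitle turns an audit PROCTITLE hex string back into a readable
// command line: hex-decode, then split the NUL-separated argv elements.
func decodeProctitle(h string) (string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return "", err
	}
	args := strings.Split(strings.TrimRight(string(raw), "\x00"), "\x00")
	return strings.Join(args, " "), nil
}

func main() {
	// proctitle value copied from the audit record above.
	const p = "69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030"
	cmd, err := decodeProctitle(p)
	if err != nil {
		panic(err)
	}
	fmt.Println(cmd) // iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000
}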
Dec 13 14:20:05.558097 systemd-resolved[1229]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 14:20:05.582664 env[1311]: time="2024-12-13T14:20:05.582614018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-255rl,Uid:0a2e947b-99c4-4ed5-a3f2-f248230d88f2,Namespace:kube-system,Attempt:1,} returns sandbox id \"07ec8daea893216dd885eedff13345b85ba4bf1baa012bc2140207312fd00752\"" Dec 13 14:20:05.583725 kubelet[2218]: E1213 14:20:05.583703 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:20:05.585385 env[1311]: time="2024-12-13T14:20:05.585344122Z" level=info msg="CreateContainer within sandbox \"07ec8daea893216dd885eedff13345b85ba4bf1baa012bc2140207312fd00752\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 14:20:05.602456 env[1311]: time="2024-12-13T14:20:05.602323779Z" level=info msg="CreateContainer within sandbox \"07ec8daea893216dd885eedff13345b85ba4bf1baa012bc2140207312fd00752\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0d0a870758eba88b923f2d848c14fa6f74e5e4958c1fc71577c14244f2544a09\"" Dec 13 14:20:05.603141 env[1311]: time="2024-12-13T14:20:05.603117369Z" level=info msg="StartContainer for \"0d0a870758eba88b923f2d848c14fa6f74e5e4958c1fc71577c14244f2544a09\"" Dec 13 14:20:05.647468 env[1311]: time="2024-12-13T14:20:05.647414277Z" level=info msg="StartContainer for \"0d0a870758eba88b923f2d848c14fa6f74e5e4958c1fc71577c14244f2544a09\" returns successfully" Dec 13 14:20:05.719260 systemd-networkd[1087]: calia4436d45805: Gained IPv6LL Dec 13 14:20:05.771058 kubelet[2218]: E1213 14:20:05.770924 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:20:05.771486 kubelet[2218]: E1213 14:20:05.771100 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:20:05.839859 kubelet[2218]: I1213 14:20:05.839377 2218 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-255rl" podStartSLOduration=39.839324642 podStartE2EDuration="39.839324642s" podCreationTimestamp="2024-12-13 14:19:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:20:05.810109008 +0000 UTC m=+52.633166921" watchObservedRunningTime="2024-12-13 14:20:05.839324642 +0000 UTC m=+52.662382555" Dec 13 14:20:05.943000 audit[4423]: NETFILTER_CFG table=filter:110 family=2 entries=10 op=nft_register_rule pid=4423 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:20:05.943000 audit[4423]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffdc07513b0 a2=0 a3=7ffdc075139c items=0 ppid=2379 pid=4423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:05.963072 kernel: audit: type=1325 audit(1734099605.943:455): table=filter:110 family=2 entries=10 op=nft_register_rule pid=4423 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:20:05.963225 kernel: audit: type=1300 
audit(1734099605.943:455): arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffdc07513b0 a2=0 a3=7ffdc075139c items=0 ppid=2379 pid=4423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:05.943000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:20:05.966689 kernel: audit: type=1327 audit(1734099605.943:455): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:20:05.971000 audit[4423]: NETFILTER_CFG table=nat:111 family=2 entries=44 op=nft_register_rule pid=4423 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:20:05.971000 audit[4423]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffdc07513b0 a2=0 a3=7ffdc075139c items=0 ppid=2379 pid=4423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:05.986341 kernel: audit: type=1325 audit(1734099605.971:456): table=nat:111 family=2 entries=44 op=nft_register_rule pid=4423 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:20:05.986390 kernel: audit: type=1300 audit(1734099605.971:456): arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffdc07513b0 a2=0 a3=7ffdc075139c items=0 ppid=2379 pid=4423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:05.986420 kernel: audit: type=1327 audit(1734099605.971:456): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:20:05.971000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:20:06.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.24:22-10.0.0.1:46398 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:06.232685 systemd[1]: Started sshd@13-10.0.0.24:22-10.0.0.1:46398.service. Dec 13 14:20:06.239059 kernel: audit: type=1130 audit(1734099606.231:457): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.24:22-10.0.0.1:46398 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:20:06.274236 env[1311]: time="2024-12-13T14:20:06.274193244Z" level=info msg="StopPodSandbox for \"be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8\"" Dec 13 14:20:06.277000 audit[4424]: USER_ACCT pid=4424 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:06.278452 sshd[4424]: Accepted publickey for core from 10.0.0.1 port 46398 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg Dec 13 14:20:06.278000 audit[4424]: CRED_ACQ pid=4424 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:06.278000 audit[4424]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe86cb1330 a2=3 a3=0 items=0 ppid=1 pid=4424 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:06.278000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:20:06.280156 sshd[4424]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:20:06.286716 systemd-logind[1295]: New session 14 of user core. Dec 13 14:20:06.286761 systemd[1]: Started session-14.scope. Dec 13 14:20:06.295000 audit[4424]: USER_START pid=4424 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:06.297000 audit[4449]: CRED_ACQ pid=4449 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:06.378364 env[1311]: 2024-12-13 14:20:06.329 [INFO][4442] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8" Dec 13 14:20:06.378364 env[1311]: 2024-12-13 14:20:06.330 [INFO][4442] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8" iface="eth0" netns="/var/run/netns/cni-8d95f913-e07f-70f1-786f-ba6cee2038b3" Dec 13 14:20:06.378364 env[1311]: 2024-12-13 14:20:06.330 [INFO][4442] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8" iface="eth0" netns="/var/run/netns/cni-8d95f913-e07f-70f1-786f-ba6cee2038b3" Dec 13 14:20:06.378364 env[1311]: 2024-12-13 14:20:06.330 [INFO][4442] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8" iface="eth0" netns="/var/run/netns/cni-8d95f913-e07f-70f1-786f-ba6cee2038b3" Dec 13 14:20:06.378364 env[1311]: 2024-12-13 14:20:06.330 [INFO][4442] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8" Dec 13 14:20:06.378364 env[1311]: 2024-12-13 14:20:06.330 [INFO][4442] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8" Dec 13 14:20:06.378364 env[1311]: 2024-12-13 14:20:06.364 [INFO][4451] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8" HandleID="k8s-pod-network.be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8" Workload="localhost-k8s-csi--node--driver--g44h2-eth0" Dec 13 14:20:06.378364 env[1311]: 2024-12-13 14:20:06.364 [INFO][4451] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:20:06.378364 env[1311]: 2024-12-13 14:20:06.364 [INFO][4451] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:20:06.378364 env[1311]: 2024-12-13 14:20:06.370 [WARNING][4451] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8" HandleID="k8s-pod-network.be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8" Workload="localhost-k8s-csi--node--driver--g44h2-eth0" Dec 13 14:20:06.378364 env[1311]: 2024-12-13 14:20:06.370 [INFO][4451] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8" HandleID="k8s-pod-network.be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8" Workload="localhost-k8s-csi--node--driver--g44h2-eth0" Dec 13 14:20:06.378364 env[1311]: 2024-12-13 14:20:06.373 [INFO][4451] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:20:06.378364 env[1311]: 2024-12-13 14:20:06.376 [INFO][4442] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8" Dec 13 14:20:06.379359 env[1311]: time="2024-12-13T14:20:06.378585381Z" level=info msg="TearDown network for sandbox \"be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8\" successfully" Dec 13 14:20:06.379359 env[1311]: time="2024-12-13T14:20:06.378631137Z" level=info msg="StopPodSandbox for \"be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8\" returns successfully" Dec 13 14:20:06.379491 env[1311]: time="2024-12-13T14:20:06.379445205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g44h2,Uid:b485a5b2-c009-42f0-8598-051c15f90fca,Namespace:calico-system,Attempt:1,}" Dec 13 14:20:06.390300 systemd[1]: run-netns-cni\x2d8d95f913\x2de07f\x2d70f1\x2d786f\x2dba6cee2038b3.mount: Deactivated successfully. 
Dec 13 14:20:06.448073 sshd[4424]: pam_unix(sshd:session): session closed for user core Dec 13 14:20:06.448000 audit[4424]: USER_END pid=4424 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:06.448000 audit[4424]: CRED_DISP pid=4424 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:06.451000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.24:22-10.0.0.1:46398 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:06.451897 systemd[1]: sshd@13-10.0.0.24:22-10.0.0.1:46398.service: Deactivated successfully. Dec 13 14:20:06.452941 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 14:20:06.453742 systemd-logind[1295]: Session 14 logged out. Waiting for processes to exit. Dec 13 14:20:06.454740 systemd-logind[1295]: Removed session 14. Dec 13 14:20:06.565490 systemd-networkd[1087]: cali37b8ed04a9e: Link UP Dec 13 14:20:06.590942 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:20:06.591104 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali37b8ed04a9e: link becomes ready Dec 13 14:20:06.591465 systemd-networkd[1087]: cali37b8ed04a9e: Gained carrier Dec 13 14:20:06.669835 env[1311]: 2024-12-13 14:20:06.441 [INFO][4468] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--g44h2-eth0 csi-node-driver- calico-system b485a5b2-c009-42f0-8598-051c15f90fca 997 0 2024-12-13 14:19:34 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-g44h2 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali37b8ed04a9e [] []}} ContainerID="f8ee8a4896d289df1227268254b296be28671a78f85f2cae3348a224a16641ce" Namespace="calico-system" Pod="csi-node-driver-g44h2" WorkloadEndpoint="localhost-k8s-csi--node--driver--g44h2-" Dec 13 14:20:06.669835 env[1311]: 2024-12-13 14:20:06.441 [INFO][4468] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f8ee8a4896d289df1227268254b296be28671a78f85f2cae3348a224a16641ce" Namespace="calico-system" Pod="csi-node-driver-g44h2" WorkloadEndpoint="localhost-k8s-csi--node--driver--g44h2-eth0" Dec 13 14:20:06.669835 env[1311]: 2024-12-13 14:20:06.478 [INFO][4482] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f8ee8a4896d289df1227268254b296be28671a78f85f2cae3348a224a16641ce" HandleID="k8s-pod-network.f8ee8a4896d289df1227268254b296be28671a78f85f2cae3348a224a16641ce" Workload="localhost-k8s-csi--node--driver--g44h2-eth0" Dec 13 14:20:06.669835 env[1311]: 2024-12-13 14:20:06.487 [INFO][4482] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f8ee8a4896d289df1227268254b296be28671a78f85f2cae3348a224a16641ce" HandleID="k8s-pod-network.f8ee8a4896d289df1227268254b296be28671a78f85f2cae3348a224a16641ce" 
Workload="localhost-k8s-csi--node--driver--g44h2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002958c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-g44h2", "timestamp":"2024-12-13 14:20:06.478735518 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 14:20:06.669835 env[1311]: 2024-12-13 14:20:06.487 [INFO][4482] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:20:06.669835 env[1311]: 2024-12-13 14:20:06.487 [INFO][4482] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:20:06.669835 env[1311]: 2024-12-13 14:20:06.487 [INFO][4482] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 14:20:06.669835 env[1311]: 2024-12-13 14:20:06.489 [INFO][4482] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f8ee8a4896d289df1227268254b296be28671a78f85f2cae3348a224a16641ce" host="localhost" Dec 13 14:20:06.669835 env[1311]: 2024-12-13 14:20:06.492 [INFO][4482] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 14:20:06.669835 env[1311]: 2024-12-13 14:20:06.497 [INFO][4482] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 14:20:06.669835 env[1311]: 2024-12-13 14:20:06.499 [INFO][4482] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 14:20:06.669835 env[1311]: 2024-12-13 14:20:06.503 [INFO][4482] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 14:20:06.669835 env[1311]: 2024-12-13 14:20:06.503 [INFO][4482] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f8ee8a4896d289df1227268254b296be28671a78f85f2cae3348a224a16641ce" host="localhost" Dec 13 14:20:06.669835 env[1311]: 2024-12-13 14:20:06.505 [INFO][4482] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f8ee8a4896d289df1227268254b296be28671a78f85f2cae3348a224a16641ce Dec 13 14:20:06.669835 env[1311]: 2024-12-13 14:20:06.516 [INFO][4482] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f8ee8a4896d289df1227268254b296be28671a78f85f2cae3348a224a16641ce" host="localhost" Dec 13 14:20:06.669835 env[1311]: 2024-12-13 14:20:06.560 [INFO][4482] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.f8ee8a4896d289df1227268254b296be28671a78f85f2cae3348a224a16641ce" host="localhost" Dec 13 14:20:06.669835 env[1311]: 2024-12-13 14:20:06.561 [INFO][4482] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.f8ee8a4896d289df1227268254b296be28671a78f85f2cae3348a224a16641ce" host="localhost" Dec 13 14:20:06.669835 env[1311]: 2024-12-13 14:20:06.561 [INFO][4482] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 14:20:06.669835 env[1311]: 2024-12-13 14:20:06.561 [INFO][4482] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="f8ee8a4896d289df1227268254b296be28671a78f85f2cae3348a224a16641ce" HandleID="k8s-pod-network.f8ee8a4896d289df1227268254b296be28671a78f85f2cae3348a224a16641ce" Workload="localhost-k8s-csi--node--driver--g44h2-eth0" Dec 13 14:20:06.670609 env[1311]: 2024-12-13 14:20:06.563 [INFO][4468] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f8ee8a4896d289df1227268254b296be28671a78f85f2cae3348a224a16641ce" Namespace="calico-system" Pod="csi-node-driver-g44h2" WorkloadEndpoint="localhost-k8s-csi--node--driver--g44h2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--g44h2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b485a5b2-c009-42f0-8598-051c15f90fca", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 19, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-g44h2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali37b8ed04a9e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:20:06.670609 env[1311]: 2024-12-13 14:20:06.563 [INFO][4468] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="f8ee8a4896d289df1227268254b296be28671a78f85f2cae3348a224a16641ce" Namespace="calico-system" Pod="csi-node-driver-g44h2" WorkloadEndpoint="localhost-k8s-csi--node--driver--g44h2-eth0" Dec 13 14:20:06.670609 env[1311]: 2024-12-13 14:20:06.563 [INFO][4468] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali37b8ed04a9e ContainerID="f8ee8a4896d289df1227268254b296be28671a78f85f2cae3348a224a16641ce" Namespace="calico-system" Pod="csi-node-driver-g44h2" WorkloadEndpoint="localhost-k8s-csi--node--driver--g44h2-eth0" Dec 13 14:20:06.670609 env[1311]: 2024-12-13 14:20:06.593 [INFO][4468] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f8ee8a4896d289df1227268254b296be28671a78f85f2cae3348a224a16641ce" Namespace="calico-system" Pod="csi-node-driver-g44h2" WorkloadEndpoint="localhost-k8s-csi--node--driver--g44h2-eth0" Dec 13 14:20:06.670609 env[1311]: 2024-12-13 14:20:06.593 [INFO][4468] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f8ee8a4896d289df1227268254b296be28671a78f85f2cae3348a224a16641ce" Namespace="calico-system" Pod="csi-node-driver-g44h2" WorkloadEndpoint="localhost-k8s-csi--node--driver--g44h2-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--g44h2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b485a5b2-c009-42f0-8598-051c15f90fca", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 19, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f8ee8a4896d289df1227268254b296be28671a78f85f2cae3348a224a16641ce", Pod:"csi-node-driver-g44h2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali37b8ed04a9e", MAC:"6a:63:a8:29:ec:e6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:20:06.670609 env[1311]: 2024-12-13 14:20:06.667 [INFO][4468] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f8ee8a4896d289df1227268254b296be28671a78f85f2cae3348a224a16641ce" Namespace="calico-system" Pod="csi-node-driver-g44h2" WorkloadEndpoint="localhost-k8s-csi--node--driver--g44h2-eth0" Dec 13 14:20:06.680000 audit[4504]: NETFILTER_CFG table=filter:112 family=2 entries=46 op=nft_register_chain pid=4504 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:20:06.680000 audit[4504]: SYSCALL arch=c000003e syscall=46 success=yes exit=22188 a0=3 a1=7fffbfee9b40 a2=0 a3=7fffbfee9b2c items=0 ppid=3577 pid=4504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:06.680000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:20:06.701901 env[1311]: time="2024-12-13T14:20:06.701752554Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:20:06.701901 env[1311]: time="2024-12-13T14:20:06.701832754Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:20:06.701901 env[1311]: time="2024-12-13T14:20:06.701847622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:20:06.702238 env[1311]: time="2024-12-13T14:20:06.702076743Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f8ee8a4896d289df1227268254b296be28671a78f85f2cae3348a224a16641ce pid=4512 runtime=io.containerd.runc.v2 Dec 13 14:20:06.729291 systemd-resolved[1229]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 14:20:06.741856 env[1311]: time="2024-12-13T14:20:06.741812817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-g44h2,Uid:b485a5b2-c009-42f0-8598-051c15f90fca,Namespace:calico-system,Attempt:1,} returns sandbox id \"f8ee8a4896d289df1227268254b296be28671a78f85f2cae3348a224a16641ce\"" Dec 13 14:20:06.774796 kubelet[2218]: E1213 14:20:06.774689 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:20:06.933000 audit[4546]: NETFILTER_CFG table=filter:113 family=2 entries=10 op=nft_register_rule pid=4546 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:20:06.933000 audit[4546]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffef0b4e7c0 a2=0 a3=7ffef0b4e7ac items=0 ppid=2379 pid=4546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:06.933000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:20:06.991000 audit[4546]: NETFILTER_CFG table=nat:114 family=2 entries=56 op=nft_register_chain pid=4546 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:20:06.991000 audit[4546]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffef0b4e7c0 a2=0 a3=7ffef0b4e7ac items=0 ppid=2379 pid=4546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:06.991000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:20:07.001696 systemd-networkd[1087]: cali976d2d0123a: Gained IPv6LL Dec 13 14:20:07.386857 systemd[1]: run-containerd-runc-k8s.io-f8ee8a4896d289df1227268254b296be28671a78f85f2cae3348a224a16641ce-runc.fQh9zY.mount: Deactivated successfully. 
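[Editor's sketch, not part of the log] Each "starting signal loop" record above names a runc v2 shim working directory of the form /run/containerd/io.containerd.runtime.v2.task/k8s.io/<sandbox-id>. Listing that directory on the node is a quick way to check which of the sandboxes created in this log (53bec54d…, 07ec8dae…, f8ee8a48…) still have a live shim. This is a read-only sketch under the assumption that the node uses containerd's default state directory and the k8s.io namespace, and that it is run with sufficient privileges.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Default containerd runtime-v2 task state directory for the k8s.io
	// namespace, matching the paths logged above.
	base := filepath.Join("/run/containerd", "io.containerd.runtime.v2.task", "k8s.io")
	entries, err := os.ReadDir(base)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, e := range entries {
		if e.IsDir() {
			fmt.Println(e.Name()) // one directory per sandbox/container id
		}
	}
}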
Dec 13 14:20:07.776690 kubelet[2218]: E1213 14:20:07.776634 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:20:07.991235 env[1311]: time="2024-12-13T14:20:07.991155959Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:20:07.994078 env[1311]: time="2024-12-13T14:20:07.994037487Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:20:07.996339 env[1311]: time="2024-12-13T14:20:07.996279745Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:20:07.998606 env[1311]: time="2024-12-13T14:20:07.998525941Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:20:07.998876 env[1311]: time="2024-12-13T14:20:07.998824891Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 13 14:20:08.000378 env[1311]: time="2024-12-13T14:20:08.000296864Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 14:20:08.001910 env[1311]: time="2024-12-13T14:20:08.001847173Z" level=info msg="CreateContainer within sandbox \"ebbb2bf361d79d220cb412bfedae470e8bcec2f90873fe7f7cf03bd47ebfb52d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 14:20:08.235006 env[1311]: time="2024-12-13T14:20:08.234932646Z" level=info msg="CreateContainer within sandbox \"ebbb2bf361d79d220cb412bfedae470e8bcec2f90873fe7f7cf03bd47ebfb52d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"75b94aa91d4e87a9ef44ee7edd04b943e45d62f3b4b253abd9ce50918cbff7ae\"" Dec 13 14:20:08.235654 env[1311]: time="2024-12-13T14:20:08.235606511Z" level=info msg="StartContainer for \"75b94aa91d4e87a9ef44ee7edd04b943e45d62f3b4b253abd9ce50918cbff7ae\"" Dec 13 14:20:08.297858 env[1311]: time="2024-12-13T14:20:08.297791606Z" level=info msg="StartContainer for \"75b94aa91d4e87a9ef44ee7edd04b943e45d62f3b4b253abd9ce50918cbff7ae\" returns successfully" Dec 13 14:20:08.535228 systemd-networkd[1087]: cali37b8ed04a9e: Gained IPv6LL Dec 13 14:20:08.571122 env[1311]: time="2024-12-13T14:20:08.570971649Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:20:08.575597 env[1311]: time="2024-12-13T14:20:08.575499215Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:20:08.578213 env[1311]: time="2024-12-13T14:20:08.578103543Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:20:08.579912 
env[1311]: time="2024-12-13T14:20:08.579882562Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:20:08.580485 env[1311]: time="2024-12-13T14:20:08.580428976Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 13 14:20:08.585453 env[1311]: time="2024-12-13T14:20:08.585386069Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Dec 13 14:20:08.586682 env[1311]: time="2024-12-13T14:20:08.586617610Z" level=info msg="CreateContainer within sandbox \"2f1c69fd6c3022ecab959cd7fcfdc7195f90f0d55961886352f84b89ccbda8bf\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 14:20:08.609193 env[1311]: time="2024-12-13T14:20:08.609146116Z" level=info msg="CreateContainer within sandbox \"2f1c69fd6c3022ecab959cd7fcfdc7195f90f0d55961886352f84b89ccbda8bf\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f54cfbf91f3c537a301a26b884ba611d1722102e0b8bd05cac41358959c1a957\"" Dec 13 14:20:08.609795 env[1311]: time="2024-12-13T14:20:08.609765198Z" level=info msg="StartContainer for \"f54cfbf91f3c537a301a26b884ba611d1722102e0b8bd05cac41358959c1a957\"" Dec 13 14:20:08.642063 systemd[1]: run-containerd-runc-k8s.io-f54cfbf91f3c537a301a26b884ba611d1722102e0b8bd05cac41358959c1a957-runc.VSrsVi.mount: Deactivated successfully. Dec 13 14:20:08.763777 env[1311]: time="2024-12-13T14:20:08.763696462Z" level=info msg="StartContainer for \"f54cfbf91f3c537a301a26b884ba611d1722102e0b8bd05cac41358959c1a957\" returns successfully" Dec 13 14:20:08.782371 kubelet[2218]: E1213 14:20:08.782340 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:20:08.925000 audit[4625]: NETFILTER_CFG table=filter:115 family=2 entries=10 op=nft_register_rule pid=4625 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:20:08.925000 audit[4625]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffe585ee600 a2=0 a3=7ffe585ee5ec items=0 ppid=2379 pid=4625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:08.925000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:20:08.931000 audit[4625]: NETFILTER_CFG table=nat:116 family=2 entries=20 op=nft_register_rule pid=4625 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:20:08.931000 audit[4625]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe585ee600 a2=0 a3=7ffe585ee5ec items=0 ppid=2379 pid=4625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:08.931000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:20:08.934044 kubelet[2218]: I1213 14:20:08.933981 2218 pod_startup_latency_tracker.go:102] "Observed pod 
startup duration" pod="calico-apiserver/calico-apiserver-67dd9c689b-4dc82" podStartSLOduration=29.935186778 podStartE2EDuration="34.933926652s" podCreationTimestamp="2024-12-13 14:19:34 +0000 UTC" firstStartedPulling="2024-12-13 14:20:03.582248072 +0000 UTC m=+50.405305985" lastFinishedPulling="2024-12-13 14:20:08.580987946 +0000 UTC m=+55.404045859" observedRunningTime="2024-12-13 14:20:08.919046198 +0000 UTC m=+55.742104111" watchObservedRunningTime="2024-12-13 14:20:08.933926652 +0000 UTC m=+55.756984575" Dec 13 14:20:08.946000 audit[4627]: NETFILTER_CFG table=filter:117 family=2 entries=10 op=nft_register_rule pid=4627 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:20:08.946000 audit[4627]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffe65d760b0 a2=0 a3=7ffe65d7609c items=0 ppid=2379 pid=4627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:08.946000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:20:08.952000 audit[4627]: NETFILTER_CFG table=nat:118 family=2 entries=20 op=nft_register_rule pid=4627 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:20:08.952000 audit[4627]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe65d760b0 a2=0 a3=7ffe65d7609c items=0 ppid=2379 pid=4627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:08.952000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:20:09.783879 kubelet[2218]: I1213 14:20:09.783836 2218 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 14:20:09.784423 kubelet[2218]: I1213 14:20:09.783836 2218 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 14:20:11.460207 kernel: kauditd_printk_skb: 31 callbacks suppressed Dec 13 14:20:11.460452 kernel: audit: type=1130 audit(1734099611.450:473): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.24:22-10.0.0.1:37318 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:11.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.24:22-10.0.0.1:37318 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:11.451730 systemd[1]: Started sshd@14-10.0.0.24:22-10.0.0.1:37318.service. 
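The pod_startup_latency_tracker record above for calico-apiserver-67dd9c689b-4dc82 is internally consistent: podStartE2EDuration equals watchObservedRunningTime minus podCreationTimestamp (34.933926652s), and podStartSLOduration appears to be that value minus the image-pull window between firstStartedPulling and lastFinishedPulling (34.933926652s − 4.998739874s = 29.935186778s). A minimal Go sketch reproducing the arithmetic from the timestamps in the record; the timestamp strings are copied verbatim with the monotonic "m=+..." offsets dropped, and the interpretation of the SLO field is an inference from these numbers rather than anything stated in the log.

package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2024-12-13 14:19:34 +0000 UTC")
	firstPull := mustParse("2024-12-13 14:20:03.582248072 +0000 UTC")
	lastPull := mustParse("2024-12-13 14:20:08.580987946 +0000 UTC")
	observed := mustParse("2024-12-13 14:20:08.933926652 +0000 UTC")

	e2e := observed.Sub(created)         // podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // E2E minus image-pull time

	fmt.Println(e2e) // 34.933926652s
	fmt.Println(slo) // 29.935186778s
}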
Dec 13 14:20:11.505470 kernel: audit: type=1101 audit(1734099611.493:474): pid=4634 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:11.505663 kernel: audit: type=1103 audit(1734099611.499:475): pid=4634 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:11.493000 audit[4634]: USER_ACCT pid=4634 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:11.499000 audit[4634]: CRED_ACQ pid=4634 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:11.501118 sshd[4634]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:20:11.506429 sshd[4634]: Accepted publickey for core from 10.0.0.1 port 37318 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg Dec 13 14:20:11.499000 audit[4634]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd9c67a600 a2=3 a3=0 items=0 ppid=1 pid=4634 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:11.511190 systemd[1]: Started session-15.scope. Dec 13 14:20:11.511574 systemd-logind[1295]: New session 15 of user core. 
Dec 13 14:20:11.515487 kernel: audit: type=1006 audit(1734099611.499:476): pid=4634 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Dec 13 14:20:11.515619 kernel: audit: type=1300 audit(1734099611.499:476): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd9c67a600 a2=3 a3=0 items=0 ppid=1 pid=4634 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:11.515662 kernel: audit: type=1327 audit(1734099611.499:476): proctitle=737368643A20636F7265205B707269765D Dec 13 14:20:11.499000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:20:11.517000 audit[4634]: USER_START pid=4634 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:11.524099 kernel: audit: type=1105 audit(1734099611.517:477): pid=4634 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:11.523000 audit[4637]: CRED_ACQ pid=4637 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:11.529811 kernel: audit: type=1103 audit(1734099611.523:478): pid=4637 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:11.641216 sshd[4634]: pam_unix(sshd:session): session closed for user core Dec 13 14:20:11.641000 audit[4634]: USER_END pid=4634 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:11.649111 kernel: audit: type=1106 audit(1734099611.641:479): pid=4634 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:11.644452 systemd[1]: sshd@14-10.0.0.24:22-10.0.0.1:37318.service: Deactivated successfully. Dec 13 14:20:11.645701 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 14:20:11.646248 systemd-logind[1295]: Session 15 logged out. Waiting for processes to exit. Dec 13 14:20:11.647282 systemd-logind[1295]: Removed session 15. 
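The audit(1734099611.450:473) tag carried by the kauditd lines above encodes the event's epoch seconds, milliseconds, and record serial number; converting 1734099611.450 to UTC yields Dec 13 14:20:11.450, matching the journal timestamp on the same line. A minimal Go sketch of that conversion:

package main

import (
	"fmt"
	"time"
)

func main() {
	// "audit(1734099611.450:473)" -> epoch seconds, milliseconds, record serial.
	const epochSec = 1734099611
	const millis = 450
	t := time.Unix(epochSec, int64(millis)*int64(time.Millisecond)).UTC()
	fmt.Println(t.Format("Jan _2 15:04:05.000")) // Dec 13 14:20:11.450
}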
Dec 13 14:20:11.641000 audit[4634]: CRED_DISP pid=4634 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:11.643000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.24:22-10.0.0.1:37318 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:11.655045 kernel: audit: type=1104 audit(1734099611.641:480): pid=4634 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:11.943329 env[1311]: time="2024-12-13T14:20:11.943242169Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:20:11.948598 env[1311]: time="2024-12-13T14:20:11.948553864Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:20:11.950918 env[1311]: time="2024-12-13T14:20:11.950858275Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:20:11.952678 env[1311]: time="2024-12-13T14:20:11.952633429Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:20:11.953222 env[1311]: time="2024-12-13T14:20:11.953186844Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Dec 13 14:20:11.954143 env[1311]: time="2024-12-13T14:20:11.954111462Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 14:20:11.961787 env[1311]: time="2024-12-13T14:20:11.961724933Z" level=info msg="CreateContainer within sandbox \"53bec54d8043be6637ad3d284eb0d8bc0afd9fd69c3294bda54a43347cbb8011\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Dec 13 14:20:11.979674 env[1311]: time="2024-12-13T14:20:11.979593155Z" level=info msg="CreateContainer within sandbox \"53bec54d8043be6637ad3d284eb0d8bc0afd9fd69c3294bda54a43347cbb8011\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"b7a5140ff0bc86728e6d6fab507a228730ad44d5021ec36eb15476ce38e786a8\"" Dec 13 14:20:11.980927 env[1311]: time="2024-12-13T14:20:11.980888616Z" level=info msg="StartContainer for \"b7a5140ff0bc86728e6d6fab507a228730ad44d5021ec36eb15476ce38e786a8\"" Dec 13 14:20:12.141331 env[1311]: time="2024-12-13T14:20:12.141254737Z" level=info msg="StartContainer for \"b7a5140ff0bc86728e6d6fab507a228730ad44d5021ec36eb15476ce38e786a8\" returns successfully" Dec 13 14:20:12.856848 kubelet[2218]: I1213 14:20:12.856794 2218 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-67dd9c689b-nvrx4" podStartSLOduration=33.663542864 podStartE2EDuration="38.856750995s" 
podCreationTimestamp="2024-12-13 14:19:34 +0000 UTC" firstStartedPulling="2024-12-13 14:20:02.806165681 +0000 UTC m=+49.629223594" lastFinishedPulling="2024-12-13 14:20:07.999373822 +0000 UTC m=+54.822431725" observedRunningTime="2024-12-13 14:20:08.93467177 +0000 UTC m=+55.757729683" watchObservedRunningTime="2024-12-13 14:20:12.856750995 +0000 UTC m=+59.679808908" Dec 13 14:20:12.857337 kubelet[2218]: I1213 14:20:12.856907 2218 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-77d57bc794-t48v6" podStartSLOduration=31.473542396 podStartE2EDuration="38.856891525s" podCreationTimestamp="2024-12-13 14:19:34 +0000 UTC" firstStartedPulling="2024-12-13 14:20:04.570159022 +0000 UTC m=+51.393216935" lastFinishedPulling="2024-12-13 14:20:11.953508141 +0000 UTC m=+58.776566064" observedRunningTime="2024-12-13 14:20:12.85643638 +0000 UTC m=+59.679494293" watchObservedRunningTime="2024-12-13 14:20:12.856891525 +0000 UTC m=+59.679949438" Dec 13 14:20:13.169695 kubelet[2218]: I1213 14:20:13.169552 2218 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 14:20:13.254000 audit[4707]: NETFILTER_CFG table=filter:119 family=2 entries=9 op=nft_register_rule pid=4707 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:20:13.254000 audit[4707]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7fff67253950 a2=0 a3=7fff6725393c items=0 ppid=2379 pid=4707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:13.254000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:20:13.261000 audit[4707]: NETFILTER_CFG table=nat:120 family=2 entries=27 op=nft_register_chain pid=4707 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:20:13.261000 audit[4707]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7fff67253950 a2=0 a3=7fff6725393c items=0 ppid=2379 pid=4707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:13.261000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:20:13.268820 env[1311]: time="2024-12-13T14:20:13.268768881Z" level=info msg="StopPodSandbox for \"1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9\"" Dec 13 14:20:13.380991 env[1311]: 2024-12-13 14:20:13.338 [WARNING][4723] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--77d57bc794--t48v6-eth0", GenerateName:"calico-kube-controllers-77d57bc794-", Namespace:"calico-system", SelfLink:"", UID:"1706fd1e-e014-4076-a36c-7345b2980a9c", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 19, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77d57bc794", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"53bec54d8043be6637ad3d284eb0d8bc0afd9fd69c3294bda54a43347cbb8011", Pod:"calico-kube-controllers-77d57bc794-t48v6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia4436d45805", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:20:13.380991 env[1311]: 2024-12-13 14:20:13.338 [INFO][4723] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9" Dec 13 14:20:13.380991 env[1311]: 2024-12-13 14:20:13.339 [INFO][4723] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9" iface="eth0" netns="" Dec 13 14:20:13.380991 env[1311]: 2024-12-13 14:20:13.339 [INFO][4723] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9" Dec 13 14:20:13.380991 env[1311]: 2024-12-13 14:20:13.339 [INFO][4723] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9" Dec 13 14:20:13.380991 env[1311]: 2024-12-13 14:20:13.366 [INFO][4731] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9" HandleID="k8s-pod-network.1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9" Workload="localhost-k8s-calico--kube--controllers--77d57bc794--t48v6-eth0" Dec 13 14:20:13.380991 env[1311]: 2024-12-13 14:20:13.366 [INFO][4731] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:20:13.380991 env[1311]: 2024-12-13 14:20:13.366 [INFO][4731] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:20:13.380991 env[1311]: 2024-12-13 14:20:13.374 [WARNING][4731] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9" HandleID="k8s-pod-network.1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9" Workload="localhost-k8s-calico--kube--controllers--77d57bc794--t48v6-eth0" Dec 13 14:20:13.380991 env[1311]: 2024-12-13 14:20:13.375 [INFO][4731] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9" HandleID="k8s-pod-network.1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9" Workload="localhost-k8s-calico--kube--controllers--77d57bc794--t48v6-eth0" Dec 13 14:20:13.380991 env[1311]: 2024-12-13 14:20:13.377 [INFO][4731] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:20:13.380991 env[1311]: 2024-12-13 14:20:13.379 [INFO][4723] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9" Dec 13 14:20:13.381649 env[1311]: time="2024-12-13T14:20:13.381060354Z" level=info msg="TearDown network for sandbox \"1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9\" successfully" Dec 13 14:20:13.381649 env[1311]: time="2024-12-13T14:20:13.381097435Z" level=info msg="StopPodSandbox for \"1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9\" returns successfully" Dec 13 14:20:13.381774 env[1311]: time="2024-12-13T14:20:13.381732574Z" level=info msg="RemovePodSandbox for \"1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9\"" Dec 13 14:20:13.381854 env[1311]: time="2024-12-13T14:20:13.381782641Z" level=info msg="Forcibly stopping sandbox \"1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9\"" Dec 13 14:20:13.455916 env[1311]: 2024-12-13 14:20:13.421 [WARNING][4753] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--77d57bc794--t48v6-eth0", GenerateName:"calico-kube-controllers-77d57bc794-", Namespace:"calico-system", SelfLink:"", UID:"1706fd1e-e014-4076-a36c-7345b2980a9c", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 19, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"77d57bc794", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"53bec54d8043be6637ad3d284eb0d8bc0afd9fd69c3294bda54a43347cbb8011", Pod:"calico-kube-controllers-77d57bc794-t48v6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia4436d45805", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:20:13.455916 env[1311]: 2024-12-13 14:20:13.421 [INFO][4753] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9" Dec 13 14:20:13.455916 env[1311]: 2024-12-13 14:20:13.421 [INFO][4753] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9" iface="eth0" netns="" Dec 13 14:20:13.455916 env[1311]: 2024-12-13 14:20:13.421 [INFO][4753] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9" Dec 13 14:20:13.455916 env[1311]: 2024-12-13 14:20:13.421 [INFO][4753] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9" Dec 13 14:20:13.455916 env[1311]: 2024-12-13 14:20:13.442 [INFO][4760] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9" HandleID="k8s-pod-network.1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9" Workload="localhost-k8s-calico--kube--controllers--77d57bc794--t48v6-eth0" Dec 13 14:20:13.455916 env[1311]: 2024-12-13 14:20:13.443 [INFO][4760] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:20:13.455916 env[1311]: 2024-12-13 14:20:13.443 [INFO][4760] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:20:13.455916 env[1311]: 2024-12-13 14:20:13.450 [WARNING][4760] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9" HandleID="k8s-pod-network.1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9" Workload="localhost-k8s-calico--kube--controllers--77d57bc794--t48v6-eth0" Dec 13 14:20:13.455916 env[1311]: 2024-12-13 14:20:13.450 [INFO][4760] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9" HandleID="k8s-pod-network.1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9" Workload="localhost-k8s-calico--kube--controllers--77d57bc794--t48v6-eth0" Dec 13 14:20:13.455916 env[1311]: 2024-12-13 14:20:13.452 [INFO][4760] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:20:13.455916 env[1311]: 2024-12-13 14:20:13.453 [INFO][4753] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9" Dec 13 14:20:13.455916 env[1311]: time="2024-12-13T14:20:13.455876566Z" level=info msg="TearDown network for sandbox \"1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9\" successfully" Dec 13 14:20:13.462034 env[1311]: time="2024-12-13T14:20:13.461930161Z" level=info msg="RemovePodSandbox \"1dd2df959a045b060d04fc748ff8b096a521ec93a73ec622d1cacdf9d3f8c7f9\" returns successfully" Dec 13 14:20:13.462788 env[1311]: time="2024-12-13T14:20:13.462753003Z" level=info msg="StopPodSandbox for \"7ae3c896a9fae2bd7d601cf8c2536ec651b6c153e49ffd437e2d9ce96e31ba91\"" Dec 13 14:20:13.527846 env[1311]: 2024-12-13 14:20:13.496 [WARNING][4782] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7ae3c896a9fae2bd7d601cf8c2536ec651b6c153e49ffd437e2d9ce96e31ba91" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--77dkv-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"727d322d-22ee-4e52-af18-8f9ebc3141c1", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 19, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6a774f300c2b6ffd365d56c4066fa64070176f6544f31088e430cfcdadc08df4", Pod:"coredns-76f75df574-77dkv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib107a93b52c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:20:13.527846 env[1311]: 2024-12-13 14:20:13.496 [INFO][4782] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7ae3c896a9fae2bd7d601cf8c2536ec651b6c153e49ffd437e2d9ce96e31ba91" Dec 13 14:20:13.527846 env[1311]: 2024-12-13 14:20:13.496 [INFO][4782] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7ae3c896a9fae2bd7d601cf8c2536ec651b6c153e49ffd437e2d9ce96e31ba91" iface="eth0" netns="" Dec 13 14:20:13.527846 env[1311]: 2024-12-13 14:20:13.496 [INFO][4782] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7ae3c896a9fae2bd7d601cf8c2536ec651b6c153e49ffd437e2d9ce96e31ba91" Dec 13 14:20:13.527846 env[1311]: 2024-12-13 14:20:13.496 [INFO][4782] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7ae3c896a9fae2bd7d601cf8c2536ec651b6c153e49ffd437e2d9ce96e31ba91" Dec 13 14:20:13.527846 env[1311]: 2024-12-13 14:20:13.519 [INFO][4789] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7ae3c896a9fae2bd7d601cf8c2536ec651b6c153e49ffd437e2d9ce96e31ba91" HandleID="k8s-pod-network.7ae3c896a9fae2bd7d601cf8c2536ec651b6c153e49ffd437e2d9ce96e31ba91" Workload="localhost-k8s-coredns--76f75df574--77dkv-eth0" Dec 13 14:20:13.527846 env[1311]: 2024-12-13 14:20:13.519 [INFO][4789] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:20:13.527846 env[1311]: 2024-12-13 14:20:13.519 [INFO][4789] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:20:13.527846 env[1311]: 2024-12-13 14:20:13.524 [WARNING][4789] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7ae3c896a9fae2bd7d601cf8c2536ec651b6c153e49ffd437e2d9ce96e31ba91" HandleID="k8s-pod-network.7ae3c896a9fae2bd7d601cf8c2536ec651b6c153e49ffd437e2d9ce96e31ba91" Workload="localhost-k8s-coredns--76f75df574--77dkv-eth0" Dec 13 14:20:13.527846 env[1311]: 2024-12-13 14:20:13.524 [INFO][4789] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7ae3c896a9fae2bd7d601cf8c2536ec651b6c153e49ffd437e2d9ce96e31ba91" HandleID="k8s-pod-network.7ae3c896a9fae2bd7d601cf8c2536ec651b6c153e49ffd437e2d9ce96e31ba91" Workload="localhost-k8s-coredns--76f75df574--77dkv-eth0" Dec 13 14:20:13.527846 env[1311]: 2024-12-13 14:20:13.525 [INFO][4789] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:20:13.527846 env[1311]: 2024-12-13 14:20:13.526 [INFO][4782] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7ae3c896a9fae2bd7d601cf8c2536ec651b6c153e49ffd437e2d9ce96e31ba91" Dec 13 14:20:13.528352 env[1311]: time="2024-12-13T14:20:13.527887311Z" level=info msg="TearDown network for sandbox \"7ae3c896a9fae2bd7d601cf8c2536ec651b6c153e49ffd437e2d9ce96e31ba91\" successfully" Dec 13 14:20:13.528352 env[1311]: time="2024-12-13T14:20:13.527924202Z" level=info msg="StopPodSandbox for \"7ae3c896a9fae2bd7d601cf8c2536ec651b6c153e49ffd437e2d9ce96e31ba91\" returns successfully" Dec 13 14:20:13.528465 env[1311]: time="2024-12-13T14:20:13.528427418Z" level=info msg="RemovePodSandbox for \"7ae3c896a9fae2bd7d601cf8c2536ec651b6c153e49ffd437e2d9ce96e31ba91\"" Dec 13 14:20:13.528521 env[1311]: time="2024-12-13T14:20:13.528460863Z" level=info msg="Forcibly stopping sandbox \"7ae3c896a9fae2bd7d601cf8c2536ec651b6c153e49ffd437e2d9ce96e31ba91\"" Dec 13 14:20:13.593528 env[1311]: 2024-12-13 14:20:13.559 [WARNING][4813] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7ae3c896a9fae2bd7d601cf8c2536ec651b6c153e49ffd437e2d9ce96e31ba91" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--77dkv-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"727d322d-22ee-4e52-af18-8f9ebc3141c1", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 19, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6a774f300c2b6ffd365d56c4066fa64070176f6544f31088e430cfcdadc08df4", Pod:"coredns-76f75df574-77dkv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib107a93b52c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:20:13.593528 env[1311]: 2024-12-13 14:20:13.560 [INFO][4813] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7ae3c896a9fae2bd7d601cf8c2536ec651b6c153e49ffd437e2d9ce96e31ba91" Dec 13 14:20:13.593528 env[1311]: 2024-12-13 14:20:13.560 [INFO][4813] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7ae3c896a9fae2bd7d601cf8c2536ec651b6c153e49ffd437e2d9ce96e31ba91" iface="eth0" netns="" Dec 13 14:20:13.593528 env[1311]: 2024-12-13 14:20:13.560 [INFO][4813] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7ae3c896a9fae2bd7d601cf8c2536ec651b6c153e49ffd437e2d9ce96e31ba91" Dec 13 14:20:13.593528 env[1311]: 2024-12-13 14:20:13.560 [INFO][4813] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7ae3c896a9fae2bd7d601cf8c2536ec651b6c153e49ffd437e2d9ce96e31ba91" Dec 13 14:20:13.593528 env[1311]: 2024-12-13 14:20:13.582 [INFO][4821] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7ae3c896a9fae2bd7d601cf8c2536ec651b6c153e49ffd437e2d9ce96e31ba91" HandleID="k8s-pod-network.7ae3c896a9fae2bd7d601cf8c2536ec651b6c153e49ffd437e2d9ce96e31ba91" Workload="localhost-k8s-coredns--76f75df574--77dkv-eth0" Dec 13 14:20:13.593528 env[1311]: 2024-12-13 14:20:13.582 [INFO][4821] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:20:13.593528 env[1311]: 2024-12-13 14:20:13.582 [INFO][4821] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:20:13.593528 env[1311]: 2024-12-13 14:20:13.588 [WARNING][4821] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7ae3c896a9fae2bd7d601cf8c2536ec651b6c153e49ffd437e2d9ce96e31ba91" HandleID="k8s-pod-network.7ae3c896a9fae2bd7d601cf8c2536ec651b6c153e49ffd437e2d9ce96e31ba91" Workload="localhost-k8s-coredns--76f75df574--77dkv-eth0" Dec 13 14:20:13.593528 env[1311]: 2024-12-13 14:20:13.588 [INFO][4821] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7ae3c896a9fae2bd7d601cf8c2536ec651b6c153e49ffd437e2d9ce96e31ba91" HandleID="k8s-pod-network.7ae3c896a9fae2bd7d601cf8c2536ec651b6c153e49ffd437e2d9ce96e31ba91" Workload="localhost-k8s-coredns--76f75df574--77dkv-eth0" Dec 13 14:20:13.593528 env[1311]: 2024-12-13 14:20:13.590 [INFO][4821] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:20:13.593528 env[1311]: 2024-12-13 14:20:13.591 [INFO][4813] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7ae3c896a9fae2bd7d601cf8c2536ec651b6c153e49ffd437e2d9ce96e31ba91" Dec 13 14:20:13.594066 env[1311]: time="2024-12-13T14:20:13.593545306Z" level=info msg="TearDown network for sandbox \"7ae3c896a9fae2bd7d601cf8c2536ec651b6c153e49ffd437e2d9ce96e31ba91\" successfully" Dec 13 14:20:13.597432 env[1311]: time="2024-12-13T14:20:13.597392164Z" level=info msg="RemovePodSandbox \"7ae3c896a9fae2bd7d601cf8c2536ec651b6c153e49ffd437e2d9ce96e31ba91\" returns successfully" Dec 13 14:20:13.598176 env[1311]: time="2024-12-13T14:20:13.598132054Z" level=info msg="StopPodSandbox for \"8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e\"" Dec 13 14:20:13.710717 env[1311]: 2024-12-13 14:20:13.678 [WARNING][4843] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67dd9c689b--4dc82-eth0", GenerateName:"calico-apiserver-67dd9c689b-", Namespace:"calico-apiserver", SelfLink:"", UID:"3cc34ed8-34cd-4a26-9a00-bd15703328eb", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 19, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67dd9c689b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2f1c69fd6c3022ecab959cd7fcfdc7195f90f0d55961886352f84b89ccbda8bf", Pod:"calico-apiserver-67dd9c689b-4dc82", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib9bd5f6f228", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:20:13.710717 env[1311]: 2024-12-13 14:20:13.679 [INFO][4843] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e" Dec 13 14:20:13.710717 env[1311]: 2024-12-13 14:20:13.679 [INFO][4843] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e" iface="eth0" netns="" Dec 13 14:20:13.710717 env[1311]: 2024-12-13 14:20:13.679 [INFO][4843] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e" Dec 13 14:20:13.710717 env[1311]: 2024-12-13 14:20:13.679 [INFO][4843] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e" Dec 13 14:20:13.710717 env[1311]: 2024-12-13 14:20:13.700 [INFO][4850] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e" HandleID="k8s-pod-network.8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e" Workload="localhost-k8s-calico--apiserver--67dd9c689b--4dc82-eth0" Dec 13 14:20:13.710717 env[1311]: 2024-12-13 14:20:13.700 [INFO][4850] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:20:13.710717 env[1311]: 2024-12-13 14:20:13.700 [INFO][4850] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:20:13.710717 env[1311]: 2024-12-13 14:20:13.706 [WARNING][4850] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e" HandleID="k8s-pod-network.8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e" Workload="localhost-k8s-calico--apiserver--67dd9c689b--4dc82-eth0" Dec 13 14:20:13.710717 env[1311]: 2024-12-13 14:20:13.706 [INFO][4850] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e" HandleID="k8s-pod-network.8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e" Workload="localhost-k8s-calico--apiserver--67dd9c689b--4dc82-eth0" Dec 13 14:20:13.710717 env[1311]: 2024-12-13 14:20:13.707 [INFO][4850] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:20:13.710717 env[1311]: 2024-12-13 14:20:13.709 [INFO][4843] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e" Dec 13 14:20:13.710717 env[1311]: time="2024-12-13T14:20:13.710669269Z" level=info msg="TearDown network for sandbox \"8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e\" successfully" Dec 13 14:20:13.710717 env[1311]: time="2024-12-13T14:20:13.710712743Z" level=info msg="StopPodSandbox for \"8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e\" returns successfully" Dec 13 14:20:13.712307 env[1311]: time="2024-12-13T14:20:13.712275504Z" level=info msg="RemovePodSandbox for \"8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e\"" Dec 13 14:20:13.712493 env[1311]: time="2024-12-13T14:20:13.712420923Z" level=info msg="Forcibly stopping sandbox \"8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e\"" Dec 13 14:20:13.803125 env[1311]: 2024-12-13 14:20:13.752 [WARNING][4874] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67dd9c689b--4dc82-eth0", GenerateName:"calico-apiserver-67dd9c689b-", Namespace:"calico-apiserver", SelfLink:"", UID:"3cc34ed8-34cd-4a26-9a00-bd15703328eb", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 19, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67dd9c689b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2f1c69fd6c3022ecab959cd7fcfdc7195f90f0d55961886352f84b89ccbda8bf", Pod:"calico-apiserver-67dd9c689b-4dc82", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib9bd5f6f228", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:20:13.803125 env[1311]: 2024-12-13 14:20:13.752 [INFO][4874] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e" Dec 13 14:20:13.803125 env[1311]: 2024-12-13 14:20:13.752 [INFO][4874] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e" iface="eth0" netns="" Dec 13 14:20:13.803125 env[1311]: 2024-12-13 14:20:13.752 [INFO][4874] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e" Dec 13 14:20:13.803125 env[1311]: 2024-12-13 14:20:13.752 [INFO][4874] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e" Dec 13 14:20:13.803125 env[1311]: 2024-12-13 14:20:13.791 [INFO][4881] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e" HandleID="k8s-pod-network.8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e" Workload="localhost-k8s-calico--apiserver--67dd9c689b--4dc82-eth0" Dec 13 14:20:13.803125 env[1311]: 2024-12-13 14:20:13.792 [INFO][4881] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:20:13.803125 env[1311]: 2024-12-13 14:20:13.792 [INFO][4881] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:20:13.803125 env[1311]: 2024-12-13 14:20:13.797 [WARNING][4881] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e" HandleID="k8s-pod-network.8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e" Workload="localhost-k8s-calico--apiserver--67dd9c689b--4dc82-eth0" Dec 13 14:20:13.803125 env[1311]: 2024-12-13 14:20:13.797 [INFO][4881] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e" HandleID="k8s-pod-network.8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e" Workload="localhost-k8s-calico--apiserver--67dd9c689b--4dc82-eth0" Dec 13 14:20:13.803125 env[1311]: 2024-12-13 14:20:13.799 [INFO][4881] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:20:13.803125 env[1311]: 2024-12-13 14:20:13.801 [INFO][4874] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e" Dec 13 14:20:13.803906 env[1311]: time="2024-12-13T14:20:13.803157481Z" level=info msg="TearDown network for sandbox \"8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e\" successfully" Dec 13 14:20:13.900193 env[1311]: time="2024-12-13T14:20:13.900137218Z" level=info msg="RemovePodSandbox \"8719c3dc7e16ebacde4d9ce3ea3769a50b44ad68145218c0928645d057b6ea8e\" returns successfully" Dec 13 14:20:13.900838 env[1311]: time="2024-12-13T14:20:13.900799901Z" level=info msg="StopPodSandbox for \"c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a\"" Dec 13 14:20:13.961125 env[1311]: 2024-12-13 14:20:13.934 [WARNING][4903] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--255rl-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"0a2e947b-99c4-4ed5-a3f2-f248230d88f2", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 19, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"07ec8daea893216dd885eedff13345b85ba4bf1baa012bc2140207312fd00752", Pod:"coredns-76f75df574-255rl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali976d2d0123a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:20:13.961125 env[1311]: 2024-12-13 14:20:13.934 [INFO][4903] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a" Dec 13 14:20:13.961125 env[1311]: 2024-12-13 14:20:13.934 [INFO][4903] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a" iface="eth0" netns="" Dec 13 14:20:13.961125 env[1311]: 2024-12-13 14:20:13.934 [INFO][4903] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a" Dec 13 14:20:13.961125 env[1311]: 2024-12-13 14:20:13.934 [INFO][4903] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a" Dec 13 14:20:13.961125 env[1311]: 2024-12-13 14:20:13.950 [INFO][4911] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a" HandleID="k8s-pod-network.c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a" Workload="localhost-k8s-coredns--76f75df574--255rl-eth0" Dec 13 14:20:13.961125 env[1311]: 2024-12-13 14:20:13.950 [INFO][4911] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:20:13.961125 env[1311]: 2024-12-13 14:20:13.950 [INFO][4911] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:20:13.961125 env[1311]: 2024-12-13 14:20:13.957 [WARNING][4911] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a" HandleID="k8s-pod-network.c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a" Workload="localhost-k8s-coredns--76f75df574--255rl-eth0" Dec 13 14:20:13.961125 env[1311]: 2024-12-13 14:20:13.957 [INFO][4911] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a" HandleID="k8s-pod-network.c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a" Workload="localhost-k8s-coredns--76f75df574--255rl-eth0" Dec 13 14:20:13.961125 env[1311]: 2024-12-13 14:20:13.958 [INFO][4911] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:20:13.961125 env[1311]: 2024-12-13 14:20:13.959 [INFO][4903] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a" Dec 13 14:20:13.961125 env[1311]: time="2024-12-13T14:20:13.961086409Z" level=info msg="TearDown network for sandbox \"c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a\" successfully" Dec 13 14:20:13.961612 env[1311]: time="2024-12-13T14:20:13.961131636Z" level=info msg="StopPodSandbox for \"c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a\" returns successfully" Dec 13 14:20:13.961859 env[1311]: time="2024-12-13T14:20:13.961820539Z" level=info msg="RemovePodSandbox for \"c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a\"" Dec 13 14:20:13.962031 env[1311]: time="2024-12-13T14:20:13.961964596Z" level=info msg="Forcibly stopping sandbox \"c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a\"" Dec 13 14:20:14.021808 env[1311]: 2024-12-13 14:20:13.993 [WARNING][4934] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--255rl-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"0a2e947b-99c4-4ed5-a3f2-f248230d88f2", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 19, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"07ec8daea893216dd885eedff13345b85ba4bf1baa012bc2140207312fd00752", Pod:"coredns-76f75df574-255rl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali976d2d0123a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:20:14.021808 env[1311]: 2024-12-13 14:20:13.993 [INFO][4934] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a" Dec 13 14:20:14.021808 env[1311]: 2024-12-13 14:20:13.993 [INFO][4934] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a" iface="eth0" netns="" Dec 13 14:20:14.021808 env[1311]: 2024-12-13 14:20:13.993 [INFO][4934] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a" Dec 13 14:20:14.021808 env[1311]: 2024-12-13 14:20:13.993 [INFO][4934] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a" Dec 13 14:20:14.021808 env[1311]: 2024-12-13 14:20:14.012 [INFO][4941] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a" HandleID="k8s-pod-network.c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a" Workload="localhost-k8s-coredns--76f75df574--255rl-eth0" Dec 13 14:20:14.021808 env[1311]: 2024-12-13 14:20:14.012 [INFO][4941] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:20:14.021808 env[1311]: 2024-12-13 14:20:14.012 [INFO][4941] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:20:14.021808 env[1311]: 2024-12-13 14:20:14.017 [WARNING][4941] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a" HandleID="k8s-pod-network.c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a" Workload="localhost-k8s-coredns--76f75df574--255rl-eth0" Dec 13 14:20:14.021808 env[1311]: 2024-12-13 14:20:14.017 [INFO][4941] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a" HandleID="k8s-pod-network.c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a" Workload="localhost-k8s-coredns--76f75df574--255rl-eth0" Dec 13 14:20:14.021808 env[1311]: 2024-12-13 14:20:14.019 [INFO][4941] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:20:14.021808 env[1311]: 2024-12-13 14:20:14.020 [INFO][4934] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a" Dec 13 14:20:14.022452 env[1311]: time="2024-12-13T14:20:14.022399102Z" level=info msg="TearDown network for sandbox \"c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a\" successfully" Dec 13 14:20:14.028173 env[1311]: time="2024-12-13T14:20:14.028129130Z" level=info msg="RemovePodSandbox \"c0ef34684f08a5b6a10a8fba8d2449090bc8bc210369e793518392989234116a\" returns successfully" Dec 13 14:20:14.028679 env[1311]: time="2024-12-13T14:20:14.028645852Z" level=info msg="StopPodSandbox for \"8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3\"" Dec 13 14:20:14.096275 env[1311]: 2024-12-13 14:20:14.063 [WARNING][4964] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67dd9c689b--nvrx4-eth0", GenerateName:"calico-apiserver-67dd9c689b-", Namespace:"calico-apiserver", SelfLink:"", UID:"237a3d9e-0254-4057-bf82-74077775e376", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 19, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67dd9c689b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ebbb2bf361d79d220cb412bfedae470e8bcec2f90873fe7f7cf03bd47ebfb52d", Pod:"calico-apiserver-67dd9c689b-nvrx4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0f0c75667ae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:20:14.096275 env[1311]: 2024-12-13 14:20:14.063 [INFO][4964] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3" Dec 13 14:20:14.096275 env[1311]: 2024-12-13 14:20:14.063 [INFO][4964] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3" iface="eth0" netns="" Dec 13 14:20:14.096275 env[1311]: 2024-12-13 14:20:14.064 [INFO][4964] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3" Dec 13 14:20:14.096275 env[1311]: 2024-12-13 14:20:14.064 [INFO][4964] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3" Dec 13 14:20:14.096275 env[1311]: 2024-12-13 14:20:14.083 [INFO][4971] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3" HandleID="k8s-pod-network.8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3" Workload="localhost-k8s-calico--apiserver--67dd9c689b--nvrx4-eth0" Dec 13 14:20:14.096275 env[1311]: 2024-12-13 14:20:14.084 [INFO][4971] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:20:14.096275 env[1311]: 2024-12-13 14:20:14.084 [INFO][4971] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:20:14.096275 env[1311]: 2024-12-13 14:20:14.090 [WARNING][4971] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3" HandleID="k8s-pod-network.8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3" Workload="localhost-k8s-calico--apiserver--67dd9c689b--nvrx4-eth0" Dec 13 14:20:14.096275 env[1311]: 2024-12-13 14:20:14.090 [INFO][4971] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3" HandleID="k8s-pod-network.8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3" Workload="localhost-k8s-calico--apiserver--67dd9c689b--nvrx4-eth0" Dec 13 14:20:14.096275 env[1311]: 2024-12-13 14:20:14.092 [INFO][4971] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:20:14.096275 env[1311]: 2024-12-13 14:20:14.094 [INFO][4964] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3" Dec 13 14:20:14.096803 env[1311]: time="2024-12-13T14:20:14.096311808Z" level=info msg="TearDown network for sandbox \"8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3\" successfully" Dec 13 14:20:14.096803 env[1311]: time="2024-12-13T14:20:14.096349410Z" level=info msg="StopPodSandbox for \"8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3\" returns successfully" Dec 13 14:20:14.096895 env[1311]: time="2024-12-13T14:20:14.096869829Z" level=info msg="RemovePodSandbox for \"8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3\"" Dec 13 14:20:14.096939 env[1311]: time="2024-12-13T14:20:14.096897723Z" level=info msg="Forcibly stopping sandbox \"8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3\"" Dec 13 14:20:14.177680 env[1311]: 2024-12-13 14:20:14.134 [WARNING][4993] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67dd9c689b--nvrx4-eth0", GenerateName:"calico-apiserver-67dd9c689b-", Namespace:"calico-apiserver", SelfLink:"", UID:"237a3d9e-0254-4057-bf82-74077775e376", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 19, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67dd9c689b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ebbb2bf361d79d220cb412bfedae470e8bcec2f90873fe7f7cf03bd47ebfb52d", Pod:"calico-apiserver-67dd9c689b-nvrx4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0f0c75667ae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:20:14.177680 env[1311]: 2024-12-13 14:20:14.134 [INFO][4993] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3" Dec 13 14:20:14.177680 env[1311]: 2024-12-13 14:20:14.134 [INFO][4993] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3" iface="eth0" netns="" Dec 13 14:20:14.177680 env[1311]: 2024-12-13 14:20:14.134 [INFO][4993] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3" Dec 13 14:20:14.177680 env[1311]: 2024-12-13 14:20:14.134 [INFO][4993] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3" Dec 13 14:20:14.177680 env[1311]: 2024-12-13 14:20:14.166 [INFO][5001] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3" HandleID="k8s-pod-network.8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3" Workload="localhost-k8s-calico--apiserver--67dd9c689b--nvrx4-eth0" Dec 13 14:20:14.177680 env[1311]: 2024-12-13 14:20:14.166 [INFO][5001] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:20:14.177680 env[1311]: 2024-12-13 14:20:14.166 [INFO][5001] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:20:14.177680 env[1311]: 2024-12-13 14:20:14.172 [WARNING][5001] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3" HandleID="k8s-pod-network.8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3" Workload="localhost-k8s-calico--apiserver--67dd9c689b--nvrx4-eth0" Dec 13 14:20:14.177680 env[1311]: 2024-12-13 14:20:14.172 [INFO][5001] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3" HandleID="k8s-pod-network.8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3" Workload="localhost-k8s-calico--apiserver--67dd9c689b--nvrx4-eth0" Dec 13 14:20:14.177680 env[1311]: 2024-12-13 14:20:14.174 [INFO][5001] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:20:14.177680 env[1311]: 2024-12-13 14:20:14.176 [INFO][4993] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3" Dec 13 14:20:14.178391 env[1311]: time="2024-12-13T14:20:14.178319079Z" level=info msg="TearDown network for sandbox \"8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3\" successfully" Dec 13 14:20:14.182595 env[1311]: time="2024-12-13T14:20:14.182550640Z" level=info msg="RemovePodSandbox \"8a3262eb4ab1c807e526558f81835beff4ceb93ea738e1a92e1e537d24daaae3\" returns successfully" Dec 13 14:20:14.183362 env[1311]: time="2024-12-13T14:20:14.183312864Z" level=info msg="StopPodSandbox for \"be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8\"" Dec 13 14:20:14.272695 env[1311]: 2024-12-13 14:20:14.221 [WARNING][5023] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--g44h2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b485a5b2-c009-42f0-8598-051c15f90fca", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 19, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f8ee8a4896d289df1227268254b296be28671a78f85f2cae3348a224a16641ce", Pod:"csi-node-driver-g44h2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali37b8ed04a9e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:20:14.272695 env[1311]: 2024-12-13 14:20:14.222 [INFO][5023] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8" Dec 13 14:20:14.272695 env[1311]: 2024-12-13 14:20:14.222 
[INFO][5023] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8" iface="eth0" netns="" Dec 13 14:20:14.272695 env[1311]: 2024-12-13 14:20:14.222 [INFO][5023] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8" Dec 13 14:20:14.272695 env[1311]: 2024-12-13 14:20:14.222 [INFO][5023] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8" Dec 13 14:20:14.272695 env[1311]: 2024-12-13 14:20:14.261 [INFO][5030] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8" HandleID="k8s-pod-network.be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8" Workload="localhost-k8s-csi--node--driver--g44h2-eth0" Dec 13 14:20:14.272695 env[1311]: 2024-12-13 14:20:14.261 [INFO][5030] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:20:14.272695 env[1311]: 2024-12-13 14:20:14.261 [INFO][5030] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:20:14.272695 env[1311]: 2024-12-13 14:20:14.267 [WARNING][5030] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8" HandleID="k8s-pod-network.be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8" Workload="localhost-k8s-csi--node--driver--g44h2-eth0" Dec 13 14:20:14.272695 env[1311]: 2024-12-13 14:20:14.267 [INFO][5030] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8" HandleID="k8s-pod-network.be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8" Workload="localhost-k8s-csi--node--driver--g44h2-eth0" Dec 13 14:20:14.272695 env[1311]: 2024-12-13 14:20:14.269 [INFO][5030] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:20:14.272695 env[1311]: 2024-12-13 14:20:14.271 [INFO][5023] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8" Dec 13 14:20:14.277609 env[1311]: time="2024-12-13T14:20:14.272724336Z" level=info msg="TearDown network for sandbox \"be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8\" successfully" Dec 13 14:20:14.277609 env[1311]: time="2024-12-13T14:20:14.272761276Z" level=info msg="StopPodSandbox for \"be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8\" returns successfully" Dec 13 14:20:14.277609 env[1311]: time="2024-12-13T14:20:14.273345007Z" level=info msg="RemovePodSandbox for \"be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8\"" Dec 13 14:20:14.277609 env[1311]: time="2024-12-13T14:20:14.273373371Z" level=info msg="Forcibly stopping sandbox \"be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8\"" Dec 13 14:20:14.362980 env[1311]: 2024-12-13 14:20:14.317 [WARNING][5055] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--g44h2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b485a5b2-c009-42f0-8598-051c15f90fca", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 19, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f8ee8a4896d289df1227268254b296be28671a78f85f2cae3348a224a16641ce", Pod:"csi-node-driver-g44h2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali37b8ed04a9e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:20:14.362980 env[1311]: 2024-12-13 14:20:14.317 [INFO][5055] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8" Dec 13 14:20:14.362980 env[1311]: 2024-12-13 14:20:14.317 [INFO][5055] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8" iface="eth0" netns="" Dec 13 14:20:14.362980 env[1311]: 2024-12-13 14:20:14.317 [INFO][5055] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8" Dec 13 14:20:14.362980 env[1311]: 2024-12-13 14:20:14.317 [INFO][5055] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8" Dec 13 14:20:14.362980 env[1311]: 2024-12-13 14:20:14.349 [INFO][5063] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8" HandleID="k8s-pod-network.be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8" Workload="localhost-k8s-csi--node--driver--g44h2-eth0" Dec 13 14:20:14.362980 env[1311]: 2024-12-13 14:20:14.349 [INFO][5063] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:20:14.362980 env[1311]: 2024-12-13 14:20:14.349 [INFO][5063] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:20:14.362980 env[1311]: 2024-12-13 14:20:14.355 [WARNING][5063] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8" HandleID="k8s-pod-network.be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8" Workload="localhost-k8s-csi--node--driver--g44h2-eth0" Dec 13 14:20:14.362980 env[1311]: 2024-12-13 14:20:14.355 [INFO][5063] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8" HandleID="k8s-pod-network.be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8" Workload="localhost-k8s-csi--node--driver--g44h2-eth0" Dec 13 14:20:14.362980 env[1311]: 2024-12-13 14:20:14.358 [INFO][5063] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:20:14.362980 env[1311]: 2024-12-13 14:20:14.360 [INFO][5055] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8" Dec 13 14:20:14.363548 env[1311]: time="2024-12-13T14:20:14.363084117Z" level=info msg="TearDown network for sandbox \"be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8\" successfully" Dec 13 14:20:14.368415 env[1311]: time="2024-12-13T14:20:14.368345625Z" level=info msg="RemovePodSandbox \"be9547a7c898341e61f0d7c7ccef669767f633dd56430cc86506d501cf5382e8\" returns successfully" Dec 13 14:20:14.455970 env[1311]: time="2024-12-13T14:20:14.455896917Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:20:14.457991 env[1311]: time="2024-12-13T14:20:14.457935610Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:20:14.461684 env[1311]: time="2024-12-13T14:20:14.461649257Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:20:14.464298 env[1311]: time="2024-12-13T14:20:14.464241492Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:20:14.464955 env[1311]: time="2024-12-13T14:20:14.464889475Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Dec 13 14:20:14.467329 env[1311]: time="2024-12-13T14:20:14.467283681Z" level=info msg="CreateContainer within sandbox \"f8ee8a4896d289df1227268254b296be28671a78f85f2cae3348a224a16641ce\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 14:20:14.480236 env[1311]: time="2024-12-13T14:20:14.479859226Z" level=info msg="CreateContainer within sandbox \"f8ee8a4896d289df1227268254b296be28671a78f85f2cae3348a224a16641ce\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"a1ea0ba8c1eae95e595a91cc2267ba5aa1418ce1038506a37694d28ddae156ae\"" Dec 13 14:20:14.482268 env[1311]: time="2024-12-13T14:20:14.482231529Z" level=info msg="StartContainer for \"a1ea0ba8c1eae95e595a91cc2267ba5aa1418ce1038506a37694d28ddae156ae\"" Dec 13 14:20:14.536315 env[1311]: time="2024-12-13T14:20:14.536182073Z" level=info msg="StartContainer for 
\"a1ea0ba8c1eae95e595a91cc2267ba5aa1418ce1038506a37694d28ddae156ae\" returns successfully" Dec 13 14:20:14.537951 env[1311]: time="2024-12-13T14:20:14.537902966Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 14:20:16.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.24:22-10.0.0.1:37330 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:16.645141 systemd[1]: Started sshd@15-10.0.0.24:22-10.0.0.1:37330.service. Dec 13 14:20:16.646832 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 13 14:20:16.646904 kernel: audit: type=1130 audit(1734099616.644:484): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.24:22-10.0.0.1:37330 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:16.683000 audit[5105]: USER_ACCT pid=5105 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:16.684899 sshd[5105]: Accepted publickey for core from 10.0.0.1 port 37330 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg Dec 13 14:20:16.688000 audit[5105]: CRED_ACQ pid=5105 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:16.689901 sshd[5105]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:20:16.694460 kernel: audit: type=1101 audit(1734099616.683:485): pid=5105 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:16.694540 kernel: audit: type=1103 audit(1734099616.688:486): pid=5105 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:16.694777 systemd[1]: Started session-16.scope. Dec 13 14:20:16.695796 systemd-logind[1295]: New session 16 of user core. 
Dec 13 14:20:16.697500 kernel: audit: type=1006 audit(1734099616.688:487): pid=5105 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Dec 13 14:20:16.688000 audit[5105]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc6241a910 a2=3 a3=0 items=0 ppid=1 pid=5105 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:16.702541 kernel: audit: type=1300 audit(1734099616.688:487): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc6241a910 a2=3 a3=0 items=0 ppid=1 pid=5105 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:16.688000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:20:16.704404 kernel: audit: type=1327 audit(1734099616.688:487): proctitle=737368643A20636F7265205B707269765D Dec 13 14:20:16.699000 audit[5105]: USER_START pid=5105 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:16.709726 kernel: audit: type=1105 audit(1734099616.699:488): pid=5105 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:16.700000 audit[5110]: CRED_ACQ pid=5110 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:16.713885 kernel: audit: type=1103 audit(1734099616.700:489): pid=5110 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:16.830815 sshd[5105]: pam_unix(sshd:session): session closed for user core Dec 13 14:20:16.830000 audit[5105]: USER_END pid=5105 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:16.833141 systemd[1]: sshd@15-10.0.0.24:22-10.0.0.1:37330.service: Deactivated successfully. Dec 13 14:20:16.834077 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 14:20:16.830000 audit[5105]: CRED_DISP pid=5105 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:16.840485 systemd-logind[1295]: Session 16 logged out. Waiting for processes to exit. 
Dec 13 14:20:16.840595 kernel: audit: type=1106 audit(1734099616.830:490): pid=5105 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:16.840632 kernel: audit: type=1104 audit(1734099616.830:491): pid=5105 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:16.832000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.24:22-10.0.0.1:37330 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:16.841237 systemd-logind[1295]: Removed session 16. Dec 13 14:20:16.896787 env[1311]: time="2024-12-13T14:20:16.896623928Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:20:16.899003 env[1311]: time="2024-12-13T14:20:16.898951991Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:20:16.900759 env[1311]: time="2024-12-13T14:20:16.900714930Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:20:16.902346 env[1311]: time="2024-12-13T14:20:16.902297414Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:20:16.902909 env[1311]: time="2024-12-13T14:20:16.902845224Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Dec 13 14:20:16.905116 env[1311]: time="2024-12-13T14:20:16.905070108Z" level=info msg="CreateContainer within sandbox \"f8ee8a4896d289df1227268254b296be28671a78f85f2cae3348a224a16641ce\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 14:20:16.920924 env[1311]: time="2024-12-13T14:20:16.920816612Z" level=info msg="CreateContainer within sandbox \"f8ee8a4896d289df1227268254b296be28671a78f85f2cae3348a224a16641ce\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"4eb197c95afa9a7d514269c49a06bc6b4c947250d0572e7423a76e3faad00df1\"" Dec 13 14:20:16.921749 env[1311]: time="2024-12-13T14:20:16.921722439Z" level=info msg="StartContainer for \"4eb197c95afa9a7d514269c49a06bc6b4c947250d0572e7423a76e3faad00df1\"" Dec 13 14:20:17.000987 env[1311]: time="2024-12-13T14:20:17.000910853Z" level=info msg="StartContainer for \"4eb197c95afa9a7d514269c49a06bc6b4c947250d0572e7423a76e3faad00df1\" returns successfully" Dec 13 14:20:17.755810 kubelet[2218]: I1213 14:20:17.755725 2218 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: 
/var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 14:20:17.758219 kubelet[2218]: I1213 14:20:17.758193 2218 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 14:20:18.660885 systemd[1]: run-containerd-runc-k8s.io-b7a5140ff0bc86728e6d6fab507a228730ad44d5021ec36eb15476ce38e786a8-runc.bgsMZa.mount: Deactivated successfully. Dec 13 14:20:20.398143 kubelet[2218]: E1213 14:20:20.398090 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:20:20.412189 kubelet[2218]: I1213 14:20:20.412153 2218 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-g44h2" podStartSLOduration=36.252056533 podStartE2EDuration="46.412110057s" podCreationTimestamp="2024-12-13 14:19:34 +0000 UTC" firstStartedPulling="2024-12-13 14:20:06.743162781 +0000 UTC m=+53.566220684" lastFinishedPulling="2024-12-13 14:20:16.903216295 +0000 UTC m=+63.726274208" observedRunningTime="2024-12-13 14:20:17.895392662 +0000 UTC m=+64.718450575" watchObservedRunningTime="2024-12-13 14:20:20.412110057 +0000 UTC m=+67.235167970" Dec 13 14:20:21.833140 systemd[1]: Started sshd@16-10.0.0.24:22-10.0.0.1:56298.service. Dec 13 14:20:21.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.24:22-10.0.0.1:56298 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:21.834387 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 14:20:21.834450 kernel: audit: type=1130 audit(1734099621.832:493): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.24:22-10.0.0.1:56298 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:21.871000 audit[5205]: USER_ACCT pid=5205 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:21.873340 sshd[5205]: Accepted publickey for core from 10.0.0.1 port 56298 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg Dec 13 14:20:21.876054 sshd[5205]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:20:21.871000 audit[5205]: CRED_ACQ pid=5205 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:21.881874 systemd-logind[1295]: New session 17 of user core. 
Dec 13 14:20:21.882994 kernel: audit: type=1101 audit(1734099621.871:494): pid=5205 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:21.883066 kernel: audit: type=1103 audit(1734099621.871:495): pid=5205 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:21.883102 kernel: audit: type=1006 audit(1734099621.871:496): pid=5205 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Dec 13 14:20:21.883139 systemd[1]: Started session-17.scope. Dec 13 14:20:21.871000 audit[5205]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd946567d0 a2=3 a3=0 items=0 ppid=1 pid=5205 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:21.890244 kernel: audit: type=1300 audit(1734099621.871:496): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd946567d0 a2=3 a3=0 items=0 ppid=1 pid=5205 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:21.890326 kernel: audit: type=1327 audit(1734099621.871:496): proctitle=737368643A20636F7265205B707269765D Dec 13 14:20:21.871000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:20:21.884000 audit[5205]: USER_START pid=5205 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:21.896252 kernel: audit: type=1105 audit(1734099621.884:497): pid=5205 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:21.896317 kernel: audit: type=1103 audit(1734099621.889:498): pid=5208 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:21.889000 audit[5208]: CRED_ACQ pid=5208 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:22.017788 sshd[5205]: pam_unix(sshd:session): session closed for user core Dec 13 14:20:22.018000 audit[5205]: USER_END pid=5205 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:22.020589 systemd[1]: sshd@16-10.0.0.24:22-10.0.0.1:56298.service: Deactivated successfully. 
Dec 13 14:20:22.021625 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 14:20:22.022735 systemd-logind[1295]: Session 17 logged out. Waiting for processes to exit. Dec 13 14:20:22.023705 systemd-logind[1295]: Removed session 17. Dec 13 14:20:22.018000 audit[5205]: CRED_DISP pid=5205 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:22.028301 kernel: audit: type=1106 audit(1734099622.018:499): pid=5205 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:22.028364 kernel: audit: type=1104 audit(1734099622.018:500): pid=5205 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:22.019000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.24:22-10.0.0.1:56298 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:25.272839 kubelet[2218]: E1213 14:20:25.272808 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:20:27.021573 systemd[1]: Started sshd@17-10.0.0.24:22-10.0.0.1:56308.service. Dec 13 14:20:27.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.24:22-10.0.0.1:56308 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:27.022963 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 14:20:27.023077 kernel: audit: type=1130 audit(1734099627.020:502): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.24:22-10.0.0.1:56308 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:27.058000 audit[5222]: USER_ACCT pid=5222 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:27.061686 sshd[5222]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:20:27.063870 sshd[5222]: Accepted publickey for core from 10.0.0.1 port 56308 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg Dec 13 14:20:27.060000 audit[5222]: CRED_ACQ pid=5222 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:27.066509 systemd-logind[1295]: New session 18 of user core. Dec 13 14:20:27.067004 systemd[1]: Started session-18.scope. 
Dec 13 14:20:27.068373 kernel: audit: type=1101 audit(1734099627.058:503): pid=5222 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:27.068420 kernel: audit: type=1103 audit(1734099627.060:504): pid=5222 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:27.068444 kernel: audit: type=1006 audit(1734099627.060:505): pid=5222 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Dec 13 14:20:27.060000 audit[5222]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffa2218500 a2=3 a3=0 items=0 ppid=1 pid=5222 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:27.074848 kernel: audit: type=1300 audit(1734099627.060:505): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffa2218500 a2=3 a3=0 items=0 ppid=1 pid=5222 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:27.074927 kernel: audit: type=1327 audit(1734099627.060:505): proctitle=737368643A20636F7265205B707269765D Dec 13 14:20:27.060000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:20:27.072000 audit[5222]: USER_START pid=5222 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:27.080494 kernel: audit: type=1105 audit(1734099627.072:506): pid=5222 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:27.080554 kernel: audit: type=1103 audit(1734099627.074:507): pid=5225 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:27.074000 audit[5225]: CRED_ACQ pid=5225 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:27.197349 sshd[5222]: pam_unix(sshd:session): session closed for user core Dec 13 14:20:27.197000 audit[5222]: USER_END pid=5222 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:27.199602 systemd[1]: sshd@17-10.0.0.24:22-10.0.0.1:56308.service: Deactivated successfully. Dec 13 14:20:27.200677 systemd[1]: session-18.scope: Deactivated successfully. 
Dec 13 14:20:27.201211 systemd-logind[1295]: Session 18 logged out. Waiting for processes to exit. Dec 13 14:20:27.201927 systemd-logind[1295]: Removed session 18. Dec 13 14:20:27.197000 audit[5222]: CRED_DISP pid=5222 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:27.206468 kernel: audit: type=1106 audit(1734099627.197:508): pid=5222 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:27.206525 kernel: audit: type=1104 audit(1734099627.197:509): pid=5222 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:27.198000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.24:22-10.0.0.1:56308 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:27.272040 kubelet[2218]: E1213 14:20:27.271883 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:20:32.201562 systemd[1]: Started sshd@18-10.0.0.24:22-10.0.0.1:34374.service. Dec 13 14:20:32.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.24:22-10.0.0.1:34374 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:32.203175 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 14:20:32.203299 kernel: audit: type=1130 audit(1734099632.200:511): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.24:22-10.0.0.1:34374 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:20:32.237000 audit[5238]: USER_ACCT pid=5238 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:32.238569 sshd[5238]: Accepted publickey for core from 10.0.0.1 port 34374 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg Dec 13 14:20:32.240617 sshd[5238]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:20:32.239000 audit[5238]: CRED_ACQ pid=5238 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:32.248524 kernel: audit: type=1101 audit(1734099632.237:512): pid=5238 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:32.248606 kernel: audit: type=1103 audit(1734099632.239:513): pid=5238 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:32.248628 kernel: audit: type=1006 audit(1734099632.239:514): pid=5238 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Dec 13 14:20:32.248258 systemd-logind[1295]: New session 19 of user core. Dec 13 14:20:32.249398 systemd[1]: Started session-19.scope. 
Dec 13 14:20:32.239000 audit[5238]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc635cd5d0 a2=3 a3=0 items=0 ppid=1 pid=5238 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:32.255011 kernel: audit: type=1300 audit(1734099632.239:514): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc635cd5d0 a2=3 a3=0 items=0 ppid=1 pid=5238 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:32.239000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:20:32.253000 audit[5238]: USER_START pid=5238 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:32.261008 kernel: audit: type=1327 audit(1734099632.239:514): proctitle=737368643A20636F7265205B707269765D Dec 13 14:20:32.261078 kernel: audit: type=1105 audit(1734099632.253:515): pid=5238 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:32.261109 kernel: audit: type=1103 audit(1734099632.255:516): pid=5241 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:32.255000 audit[5241]: CRED_ACQ pid=5241 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:32.387502 sshd[5238]: pam_unix(sshd:session): session closed for user core Dec 13 14:20:32.388000 audit[5238]: USER_END pid=5238 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:32.399202 kernel: audit: type=1106 audit(1734099632.388:517): pid=5238 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:32.399283 kernel: audit: type=1104 audit(1734099632.388:518): pid=5238 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:32.388000 audit[5238]: CRED_DISP pid=5238 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:32.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='unit=sshd@19-10.0.0.24:22-10.0.0.1:34390 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:32.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.24:22-10.0.0.1:34374 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:32.390814 systemd[1]: Started sshd@19-10.0.0.24:22-10.0.0.1:34390.service. Dec 13 14:20:32.391467 systemd[1]: sshd@18-10.0.0.24:22-10.0.0.1:34374.service: Deactivated successfully. Dec 13 14:20:32.392991 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 14:20:32.396100 systemd-logind[1295]: Session 19 logged out. Waiting for processes to exit. Dec 13 14:20:32.397107 systemd-logind[1295]: Removed session 19. Dec 13 14:20:32.426000 audit[5250]: USER_ACCT pid=5250 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:32.427446 sshd[5250]: Accepted publickey for core from 10.0.0.1 port 34390 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg Dec 13 14:20:32.427000 audit[5250]: CRED_ACQ pid=5250 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:32.427000 audit[5250]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc7fe8cde0 a2=3 a3=0 items=0 ppid=1 pid=5250 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:32.427000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:20:32.428841 sshd[5250]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:20:32.432993 systemd-logind[1295]: New session 20 of user core. Dec 13 14:20:32.433901 systemd[1]: Started session-20.scope. Dec 13 14:20:32.438000 audit[5250]: USER_START pid=5250 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:32.440000 audit[5255]: CRED_ACQ pid=5255 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:32.929562 sshd[5250]: pam_unix(sshd:session): session closed for user core Dec 13 14:20:32.932073 systemd[1]: Started sshd@20-10.0.0.24:22-10.0.0.1:34400.service. 
Dec 13 14:20:32.930000 audit[5250]: USER_END pid=5250 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:32.930000 audit[5250]: CRED_DISP pid=5250 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:32.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.24:22-10.0.0.1:34400 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:32.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.24:22-10.0.0.1:34390 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:32.932872 systemd[1]: sshd@19-10.0.0.24:22-10.0.0.1:34390.service: Deactivated successfully. Dec 13 14:20:32.933605 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 14:20:32.935153 systemd-logind[1295]: Session 20 logged out. Waiting for processes to exit. Dec 13 14:20:32.935792 systemd-logind[1295]: Removed session 20. Dec 13 14:20:32.963000 audit[5262]: USER_ACCT pid=5262 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:32.964655 sshd[5262]: Accepted publickey for core from 10.0.0.1 port 34400 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg Dec 13 14:20:32.964000 audit[5262]: CRED_ACQ pid=5262 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:32.964000 audit[5262]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe61744870 a2=3 a3=0 items=0 ppid=1 pid=5262 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:32.964000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:20:32.965637 sshd[5262]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:20:32.969974 systemd[1]: Started session-21.scope. Dec 13 14:20:32.970159 systemd-logind[1295]: New session 21 of user core. 
Dec 13 14:20:32.973000 audit[5262]: USER_START pid=5262 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:32.975000 audit[5267]: CRED_ACQ pid=5267 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:34.561000 audit[5282]: NETFILTER_CFG table=filter:121 family=2 entries=20 op=nft_register_rule pid=5282 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:20:34.561000 audit[5282]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffca68b4aa0 a2=0 a3=7ffca68b4a8c items=0 ppid=2379 pid=5282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:34.561000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:20:34.573000 audit[5282]: NETFILTER_CFG table=nat:122 family=2 entries=22 op=nft_register_rule pid=5282 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:20:34.573000 audit[5282]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffca68b4aa0 a2=0 a3=0 items=0 ppid=2379 pid=5282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:34.573000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:20:34.575279 sshd[5262]: pam_unix(sshd:session): session closed for user core Dec 13 14:20:34.576000 audit[5262]: USER_END pid=5262 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:34.577987 systemd[1]: Started sshd@21-10.0.0.24:22-10.0.0.1:34404.service. Dec 13 14:20:34.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.24:22-10.0.0.1:34404 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:34.578000 audit[5262]: CRED_DISP pid=5262 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:34.581799 systemd[1]: sshd@20-10.0.0.24:22-10.0.0.1:34400.service: Deactivated successfully. Dec 13 14:20:34.581000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.24:22-10.0.0.1:34400 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:34.583238 systemd-logind[1295]: Session 21 logged out. Waiting for processes to exit. Dec 13 14:20:34.583323 systemd[1]: session-21.scope: Deactivated successfully. 
Dec 13 14:20:34.584478 systemd-logind[1295]: Removed session 21. Dec 13 14:20:34.589000 audit[5288]: NETFILTER_CFG table=filter:123 family=2 entries=32 op=nft_register_rule pid=5288 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:20:34.589000 audit[5288]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffed7ae32a0 a2=0 a3=7ffed7ae328c items=0 ppid=2379 pid=5288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:34.589000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:20:34.593000 audit[5288]: NETFILTER_CFG table=nat:124 family=2 entries=22 op=nft_register_rule pid=5288 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:20:34.593000 audit[5288]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffed7ae32a0 a2=0 a3=0 items=0 ppid=2379 pid=5288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:34.593000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:20:34.612000 audit[5283]: USER_ACCT pid=5283 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:34.613930 sshd[5283]: Accepted publickey for core from 10.0.0.1 port 34404 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg Dec 13 14:20:34.613000 audit[5283]: CRED_ACQ pid=5283 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:34.613000 audit[5283]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeb4670040 a2=3 a3=0 items=0 ppid=1 pid=5283 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:34.613000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:20:34.615056 sshd[5283]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:20:34.619953 systemd-logind[1295]: New session 22 of user core. Dec 13 14:20:34.621115 systemd[1]: Started session-22.scope. 
Dec 13 14:20:34.625000 audit[5283]: USER_START pid=5283 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:34.627000 audit[5290]: CRED_ACQ pid=5290 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:34.928212 sshd[5283]: pam_unix(sshd:session): session closed for user core Dec 13 14:20:34.928000 audit[5283]: USER_END pid=5283 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:34.928000 audit[5283]: CRED_DISP pid=5283 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:34.931524 systemd[1]: Started sshd@22-10.0.0.24:22-10.0.0.1:34412.service. Dec 13 14:20:34.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.24:22-10.0.0.1:34412 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:34.932304 systemd[1]: sshd@21-10.0.0.24:22-10.0.0.1:34404.service: Deactivated successfully. Dec 13 14:20:34.931000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.24:22-10.0.0.1:34404 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:34.933817 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 14:20:34.934461 systemd-logind[1295]: Session 22 logged out. Waiting for processes to exit. Dec 13 14:20:34.935562 systemd-logind[1295]: Removed session 22. Dec 13 14:20:34.963000 audit[5298]: USER_ACCT pid=5298 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:34.964948 sshd[5298]: Accepted publickey for core from 10.0.0.1 port 34412 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg Dec 13 14:20:34.965000 audit[5298]: CRED_ACQ pid=5298 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:34.965000 audit[5298]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe2dc47800 a2=3 a3=0 items=0 ppid=1 pid=5298 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:34.965000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:20:34.967086 sshd[5298]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:20:34.971330 systemd-logind[1295]: New session 23 of user core. 
Dec 13 14:20:34.972184 systemd[1]: Started session-23.scope. Dec 13 14:20:34.976000 audit[5298]: USER_START pid=5298 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:34.978000 audit[5302]: CRED_ACQ pid=5302 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:35.099104 sshd[5298]: pam_unix(sshd:session): session closed for user core Dec 13 14:20:35.099000 audit[5298]: USER_END pid=5298 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:35.099000 audit[5298]: CRED_DISP pid=5298 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:35.101786 systemd[1]: sshd@22-10.0.0.24:22-10.0.0.1:34412.service: Deactivated successfully. Dec 13 14:20:35.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.24:22-10.0.0.1:34412 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:35.103028 systemd-logind[1295]: Session 23 logged out. Waiting for processes to exit. Dec 13 14:20:35.103048 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 14:20:35.103830 systemd-logind[1295]: Removed session 23. Dec 13 14:20:36.272311 kubelet[2218]: E1213 14:20:36.272256 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:20:38.271750 kubelet[2218]: E1213 14:20:38.271692 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:20:40.102368 systemd[1]: Started sshd@23-10.0.0.24:22-10.0.0.1:43662.service. Dec 13 14:20:40.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.24:22-10.0.0.1:43662 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:40.104058 kernel: kauditd_printk_skb: 57 callbacks suppressed Dec 13 14:20:40.104223 kernel: audit: type=1130 audit(1734099640.101:560): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.24:22-10.0.0.1:43662 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:20:40.134000 audit[5313]: USER_ACCT pid=5313 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:40.135296 sshd[5313]: Accepted publickey for core from 10.0.0.1 port 43662 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg Dec 13 14:20:40.140058 kernel: audit: type=1101 audit(1734099640.134:561): pid=5313 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:40.139000 audit[5313]: CRED_ACQ pid=5313 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:40.140535 sshd[5313]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:20:40.144494 systemd-logind[1295]: New session 24 of user core. Dec 13 14:20:40.145347 systemd[1]: Started session-24.scope. Dec 13 14:20:40.146877 kernel: audit: type=1103 audit(1734099640.139:562): pid=5313 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:40.146928 kernel: audit: type=1006 audit(1734099640.139:563): pid=5313 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Dec 13 14:20:40.139000 audit[5313]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe5b10da00 a2=3 a3=0 items=0 ppid=1 pid=5313 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:40.151576 kernel: audit: type=1300 audit(1734099640.139:563): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe5b10da00 a2=3 a3=0 items=0 ppid=1 pid=5313 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:40.151659 kernel: audit: type=1327 audit(1734099640.139:563): proctitle=737368643A20636F7265205B707269765D Dec 13 14:20:40.139000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:20:40.149000 audit[5313]: USER_START pid=5313 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:40.157930 kernel: audit: type=1105 audit(1734099640.149:564): pid=5313 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:40.158109 kernel: audit: type=1103 audit(1734099640.150:565): pid=5316 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:40.150000 audit[5316]: CRED_ACQ pid=5316 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:40.253281 sshd[5313]: pam_unix(sshd:session): session closed for user core Dec 13 14:20:40.253000 audit[5313]: USER_END pid=5313 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:40.255958 systemd[1]: sshd@23-10.0.0.24:22-10.0.0.1:43662.service: Deactivated successfully. Dec 13 14:20:40.257181 systemd-logind[1295]: Session 24 logged out. Waiting for processes to exit. Dec 13 14:20:40.257328 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 14:20:40.258302 systemd-logind[1295]: Removed session 24. Dec 13 14:20:40.253000 audit[5313]: CRED_DISP pid=5313 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:40.262597 kernel: audit: type=1106 audit(1734099640.253:566): pid=5313 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:40.262680 kernel: audit: type=1104 audit(1734099640.253:567): pid=5313 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:40.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.24:22-10.0.0.1:43662 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:20:41.753000 audit[5335]: NETFILTER_CFG table=filter:125 family=2 entries=20 op=nft_register_rule pid=5335 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:20:41.753000 audit[5335]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7fff1f861e40 a2=0 a3=7fff1f861e2c items=0 ppid=2379 pid=5335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:41.753000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:20:41.760000 audit[5335]: NETFILTER_CFG table=nat:126 family=2 entries=106 op=nft_register_chain pid=5335 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:20:41.760000 audit[5335]: SYSCALL arch=c000003e syscall=46 success=yes exit=49452 a0=3 a1=7fff1f861e40 a2=0 a3=7fff1f861e2c items=0 ppid=2379 pid=5335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:41.760000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:20:42.178671 kubelet[2218]: I1213 14:20:42.178526 2218 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 14:20:42.215000 audit[5338]: NETFILTER_CFG table=filter:127 family=2 entries=8 op=nft_register_rule pid=5338 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:20:42.215000 audit[5338]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffdf0d1cbc0 a2=0 a3=7ffdf0d1cbac items=0 ppid=2379 pid=5338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:42.215000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:20:42.225000 audit[5338]: NETFILTER_CFG table=nat:128 family=2 entries=58 op=nft_register_chain pid=5338 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:20:42.225000 audit[5338]: SYSCALL arch=c000003e syscall=46 success=yes exit=20452 a0=3 a1=7ffdf0d1cbc0 a2=0 a3=7ffdf0d1cbac items=0 ppid=2379 pid=5338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:42.225000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:20:45.257345 systemd[1]: Started sshd@24-10.0.0.24:22-10.0.0.1:43676.service. Dec 13 14:20:45.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.24:22-10.0.0.1:43676 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:20:45.258511 kernel: kauditd_printk_skb: 13 callbacks suppressed Dec 13 14:20:45.258570 kernel: audit: type=1130 audit(1734099645.256:573): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.24:22-10.0.0.1:43676 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:45.290000 audit[5339]: USER_ACCT pid=5339 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:45.291634 sshd[5339]: Accepted publickey for core from 10.0.0.1 port 43676 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg Dec 13 14:20:45.294000 audit[5339]: CRED_ACQ pid=5339 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:45.296145 sshd[5339]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:20:45.299811 kernel: audit: type=1101 audit(1734099645.290:574): pid=5339 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:45.299908 kernel: audit: type=1103 audit(1734099645.294:575): pid=5339 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:45.299940 kernel: audit: type=1006 audit(1734099645.294:576): pid=5339 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Dec 13 14:20:45.300312 systemd-logind[1295]: New session 25 of user core. Dec 13 14:20:45.301129 systemd[1]: Started session-25.scope. 
Dec 13 14:20:45.307057 kernel: audit: type=1300 audit(1734099645.294:576): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff454d6ab0 a2=3 a3=0 items=0 ppid=1 pid=5339 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:45.307181 kernel: audit: type=1327 audit(1734099645.294:576): proctitle=737368643A20636F7265205B707269765D Dec 13 14:20:45.294000 audit[5339]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff454d6ab0 a2=3 a3=0 items=0 ppid=1 pid=5339 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:45.294000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:20:45.305000 audit[5339]: USER_START pid=5339 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:45.312124 kernel: audit: type=1105 audit(1734099645.305:577): pid=5339 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:45.312174 kernel: audit: type=1103 audit(1734099645.307:578): pid=5342 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:45.307000 audit[5342]: CRED_ACQ pid=5342 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:45.430800 sshd[5339]: pam_unix(sshd:session): session closed for user core Dec 13 14:20:45.431000 audit[5339]: USER_END pid=5339 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:45.434055 systemd[1]: sshd@24-10.0.0.24:22-10.0.0.1:43676.service: Deactivated successfully. Dec 13 14:20:45.435278 systemd-logind[1295]: Session 25 logged out. Waiting for processes to exit. Dec 13 14:20:45.435298 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 14:20:45.436404 systemd-logind[1295]: Removed session 25. 
Dec 13 14:20:45.431000 audit[5339]: CRED_DISP pid=5339 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:45.441977 kernel: audit: type=1106 audit(1734099645.431:579): pid=5339 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:45.442128 kernel: audit: type=1104 audit(1734099645.431:580): pid=5339 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:45.433000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.24:22-10.0.0.1:43676 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:50.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.24:22-10.0.0.1:45072 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:50.435284 systemd[1]: Started sshd@25-10.0.0.24:22-10.0.0.1:45072.service. Dec 13 14:20:50.436528 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 14:20:50.436616 kernel: audit: type=1130 audit(1734099650.434:582): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.24:22-10.0.0.1:45072 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:50.469000 audit[5398]: USER_ACCT pid=5398 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:50.470867 sshd[5398]: Accepted publickey for core from 10.0.0.1 port 45072 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg Dec 13 14:20:50.472539 sshd[5398]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:20:50.471000 audit[5398]: CRED_ACQ pid=5398 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:50.476910 systemd-logind[1295]: New session 26 of user core. Dec 13 14:20:50.477734 systemd[1]: Started session-26.scope. 
Dec 13 14:20:50.479802 kernel: audit: type=1101 audit(1734099650.469:583): pid=5398 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:50.480125 kernel: audit: type=1103 audit(1734099650.471:584): pid=5398 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:50.480166 kernel: audit: type=1006 audit(1734099650.471:585): pid=5398 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Dec 13 14:20:50.471000 audit[5398]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeb6742970 a2=3 a3=0 items=0 ppid=1 pid=5398 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:50.487378 kernel: audit: type=1300 audit(1734099650.471:585): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeb6742970 a2=3 a3=0 items=0 ppid=1 pid=5398 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:50.487471 kernel: audit: type=1327 audit(1734099650.471:585): proctitle=737368643A20636F7265205B707269765D Dec 13 14:20:50.471000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:20:50.489044 kernel: audit: type=1105 audit(1734099650.482:586): pid=5398 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:50.482000 audit[5398]: USER_START pid=5398 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:50.483000 audit[5401]: CRED_ACQ pid=5401 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:50.498164 kernel: audit: type=1103 audit(1734099650.483:587): pid=5401 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:50.596491 sshd[5398]: pam_unix(sshd:session): session closed for user core Dec 13 14:20:50.596000 audit[5398]: USER_END pid=5398 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:50.599298 systemd[1]: sshd@25-10.0.0.24:22-10.0.0.1:45072.service: Deactivated successfully. Dec 13 14:20:50.600314 systemd[1]: session-26.scope: Deactivated successfully. 
Dec 13 14:20:50.600412 systemd-logind[1295]: Session 26 logged out. Waiting for processes to exit. Dec 13 14:20:50.601436 systemd-logind[1295]: Removed session 26. Dec 13 14:20:50.596000 audit[5398]: CRED_DISP pid=5398 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:50.607201 kernel: audit: type=1106 audit(1734099650.596:588): pid=5398 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:50.607273 kernel: audit: type=1104 audit(1734099650.596:589): pid=5398 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:50.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.24:22-10.0.0.1:45072 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:55.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.24:22-10.0.0.1:45074 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:55.600614 systemd[1]: Started sshd@26-10.0.0.24:22-10.0.0.1:45074.service. Dec 13 14:20:55.601705 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 14:20:55.601823 kernel: audit: type=1130 audit(1734099655.599:591): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.24:22-10.0.0.1:45074 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:55.632000 audit[5412]: USER_ACCT pid=5412 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:55.633973 sshd[5412]: Accepted publickey for core from 10.0.0.1 port 45074 ssh2: RSA SHA256:EAWjiJIG7yD8wY8MRJ/aywn+PPpkYAApPiVa2OUhImg Dec 13 14:20:55.636449 sshd[5412]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:20:55.635000 audit[5412]: CRED_ACQ pid=5412 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:55.640612 systemd-logind[1295]: New session 27 of user core. 
Dec 13 14:20:55.641413 kernel: audit: type=1101 audit(1734099655.632:592): pid=5412 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:55.641470 kernel: audit: type=1103 audit(1734099655.635:593): pid=5412 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:55.641489 kernel: audit: type=1006 audit(1734099655.635:594): pid=5412 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Dec 13 14:20:55.641436 systemd[1]: Started session-27.scope. Dec 13 14:20:55.635000 audit[5412]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeba49dc60 a2=3 a3=0 items=0 ppid=1 pid=5412 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:55.648037 kernel: audit: type=1300 audit(1734099655.635:594): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeba49dc60 a2=3 a3=0 items=0 ppid=1 pid=5412 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:55.648629 kernel: audit: type=1327 audit(1734099655.635:594): proctitle=737368643A20636F7265205B707269765D Dec 13 14:20:55.635000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:20:55.645000 audit[5412]: USER_START pid=5412 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:55.653770 kernel: audit: type=1105 audit(1734099655.645:595): pid=5412 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:55.653857 kernel: audit: type=1103 audit(1734099655.647:596): pid=5415 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:55.647000 audit[5415]: CRED_ACQ pid=5415 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:55.755964 sshd[5412]: pam_unix(sshd:session): session closed for user core Dec 13 14:20:55.756000 audit[5412]: USER_END pid=5412 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:55.756000 audit[5412]: CRED_DISP pid=5412 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:55.759262 systemd[1]: sshd@26-10.0.0.24:22-10.0.0.1:45074.service: Deactivated successfully. Dec 13 14:20:55.760088 systemd[1]: session-27.scope: Deactivated successfully. Dec 13 14:20:55.760571 systemd-logind[1295]: Session 27 logged out. Waiting for processes to exit. Dec 13 14:20:55.761367 systemd-logind[1295]: Removed session 27. Dec 13 14:20:55.766680 kernel: audit: type=1106 audit(1734099655.756:597): pid=5412 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:55.766731 kernel: audit: type=1104 audit(1734099655.756:598): pid=5412 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:20:55.758000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.24:22-10.0.0.1:45074 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:57.272941 kubelet[2218]: E1213 14:20:57.272908 2218 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"