Dec 13 01:57:50.841629 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Dec 12 23:50:37 -00 2024 Dec 13 01:57:50.841648 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 01:57:50.841656 kernel: BIOS-provided physical RAM map: Dec 13 01:57:50.841662 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Dec 13 01:57:50.841667 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Dec 13 01:57:50.841673 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Dec 13 01:57:50.841679 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Dec 13 01:57:50.841685 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Dec 13 01:57:50.841691 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Dec 13 01:57:50.841697 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Dec 13 01:57:50.841702 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Dec 13 01:57:50.841716 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Dec 13 01:57:50.841723 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Dec 13 01:57:50.841730 kernel: NX (Execute Disable) protection: active Dec 13 01:57:50.841740 kernel: SMBIOS 2.8 present. Dec 13 01:57:50.841746 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Dec 13 01:57:50.841752 kernel: Hypervisor detected: KVM Dec 13 01:57:50.841757 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 13 01:57:50.841763 kernel: kvm-clock: cpu 0, msr 9b19b001, primary cpu clock Dec 13 01:57:50.841769 kernel: kvm-clock: using sched offset of 2387178562 cycles Dec 13 01:57:50.841776 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 13 01:57:50.841782 kernel: tsc: Detected 2794.748 MHz processor Dec 13 01:57:50.841788 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 01:57:50.841795 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 01:57:50.841802 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Dec 13 01:57:50.841808 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 01:57:50.841814 kernel: Using GB pages for direct mapping Dec 13 01:57:50.841820 kernel: ACPI: Early table checksum verification disabled Dec 13 01:57:50.841826 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Dec 13 01:57:50.841832 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:57:50.841838 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:57:50.841844 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:57:50.841851 kernel: ACPI: FACS 0x000000009CFE0000 000040 Dec 13 01:57:50.841857 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:57:50.841863 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:57:50.841869 kernel: ACPI: MCFG 
0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:57:50.841875 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 01:57:50.841881 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db] Dec 13 01:57:50.841887 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7] Dec 13 01:57:50.841893 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Dec 13 01:57:50.841902 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b] Dec 13 01:57:50.841909 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3] Dec 13 01:57:50.841915 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df] Dec 13 01:57:50.841921 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407] Dec 13 01:57:50.841928 kernel: No NUMA configuration found Dec 13 01:57:50.841934 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Dec 13 01:57:50.841942 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Dec 13 01:57:50.841948 kernel: Zone ranges: Dec 13 01:57:50.841955 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 01:57:50.841961 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Dec 13 01:57:50.841967 kernel: Normal empty Dec 13 01:57:50.841974 kernel: Movable zone start for each node Dec 13 01:57:50.841980 kernel: Early memory node ranges Dec 13 01:57:50.841986 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Dec 13 01:57:50.841993 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Dec 13 01:57:50.842000 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Dec 13 01:57:50.842007 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 01:57:50.842013 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Dec 13 01:57:50.842020 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Dec 13 01:57:50.842026 kernel: ACPI: PM-Timer IO Port: 0x608 Dec 13 01:57:50.842032 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 13 01:57:50.842039 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 13 01:57:50.842045 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Dec 13 01:57:50.842052 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 13 01:57:50.842058 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 01:57:50.842066 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 13 01:57:50.842072 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 13 01:57:50.842078 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 01:57:50.842085 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Dec 13 01:57:50.842091 kernel: TSC deadline timer available Dec 13 01:57:50.842098 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Dec 13 01:57:50.842104 kernel: kvm-guest: KVM setup pv remote TLB flush Dec 13 01:57:50.842110 kernel: kvm-guest: setup PV sched yield Dec 13 01:57:50.842117 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Dec 13 01:57:50.842124 kernel: Booting paravirtualized kernel on KVM Dec 13 01:57:50.842131 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 01:57:50.842137 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:4 nr_node_ids:1 Dec 13 01:57:50.842144 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 
d32488 u524288 Dec 13 01:57:50.842150 kernel: pcpu-alloc: s188696 r8192 d32488 u524288 alloc=1*2097152 Dec 13 01:57:50.842157 kernel: pcpu-alloc: [0] 0 1 2 3 Dec 13 01:57:50.842163 kernel: kvm-guest: setup async PF for cpu 0 Dec 13 01:57:50.842180 kernel: kvm-guest: stealtime: cpu 0, msr 9cc1c0c0 Dec 13 01:57:50.842186 kernel: kvm-guest: PV spinlocks enabled Dec 13 01:57:50.842194 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 13 01:57:50.842200 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732 Dec 13 01:57:50.842207 kernel: Policy zone: DMA32 Dec 13 01:57:50.842214 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 01:57:50.842221 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 01:57:50.842228 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 01:57:50.842234 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 01:57:50.842241 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 01:57:50.842249 kernel: Memory: 2436696K/2571752K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47476K init, 4108K bss, 134796K reserved, 0K cma-reserved) Dec 13 01:57:50.842255 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Dec 13 01:57:50.842262 kernel: ftrace: allocating 34549 entries in 135 pages Dec 13 01:57:50.842268 kernel: ftrace: allocated 135 pages with 4 groups Dec 13 01:57:50.842274 kernel: rcu: Hierarchical RCU implementation. Dec 13 01:57:50.842281 kernel: rcu: RCU event tracing is enabled. Dec 13 01:57:50.842288 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Dec 13 01:57:50.842294 kernel: Rude variant of Tasks RCU enabled. Dec 13 01:57:50.842301 kernel: Tracing variant of Tasks RCU enabled. Dec 13 01:57:50.842309 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 01:57:50.842315 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Dec 13 01:57:50.842321 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Dec 13 01:57:50.842328 kernel: random: crng init done Dec 13 01:57:50.842334 kernel: Console: colour VGA+ 80x25 Dec 13 01:57:50.842340 kernel: printk: console [ttyS0] enabled Dec 13 01:57:50.842347 kernel: ACPI: Core revision 20210730 Dec 13 01:57:50.842353 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Dec 13 01:57:50.842360 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 01:57:50.842368 kernel: x2apic enabled Dec 13 01:57:50.842374 kernel: Switched APIC routing to physical x2apic. Dec 13 01:57:50.842380 kernel: kvm-guest: setup PV IPIs Dec 13 01:57:50.842387 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Dec 13 01:57:50.842393 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Dec 13 01:57:50.842400 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Dec 13 01:57:50.842406 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Dec 13 01:57:50.842413 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Dec 13 01:57:50.842419 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Dec 13 01:57:50.842431 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 01:57:50.842438 kernel: Spectre V2 : Mitigation: Retpolines Dec 13 01:57:50.842445 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 01:57:50.842452 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 01:57:50.842459 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Dec 13 01:57:50.842466 kernel: RETBleed: Mitigation: untrained return thunk Dec 13 01:57:50.842473 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 13 01:57:50.842480 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Dec 13 01:57:50.842487 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 01:57:50.842494 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 01:57:50.842501 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 01:57:50.842508 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 01:57:50.842515 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Dec 13 01:57:50.842522 kernel: Freeing SMP alternatives memory: 32K Dec 13 01:57:50.842528 kernel: pid_max: default: 32768 minimum: 301 Dec 13 01:57:50.842535 kernel: LSM: Security Framework initializing Dec 13 01:57:50.842543 kernel: SELinux: Initializing. Dec 13 01:57:50.842550 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 01:57:50.842556 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 01:57:50.842564 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Dec 13 01:57:50.842570 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Dec 13 01:57:50.842577 kernel: ... version: 0 Dec 13 01:57:50.842584 kernel: ... bit width: 48 Dec 13 01:57:50.842590 kernel: ... generic registers: 6 Dec 13 01:57:50.842597 kernel: ... value mask: 0000ffffffffffff Dec 13 01:57:50.842605 kernel: ... max period: 00007fffffffffff Dec 13 01:57:50.842612 kernel: ... fixed-purpose events: 0 Dec 13 01:57:50.842618 kernel: ... event mask: 000000000000003f Dec 13 01:57:50.842625 kernel: signal: max sigframe size: 1776 Dec 13 01:57:50.842632 kernel: rcu: Hierarchical SRCU implementation. Dec 13 01:57:50.842638 kernel: smp: Bringing up secondary CPUs ... Dec 13 01:57:50.842645 kernel: x86: Booting SMP configuration: Dec 13 01:57:50.842652 kernel: .... 
node #0, CPUs: #1 Dec 13 01:57:50.842659 kernel: kvm-clock: cpu 1, msr 9b19b041, secondary cpu clock Dec 13 01:57:50.842665 kernel: kvm-guest: setup async PF for cpu 1 Dec 13 01:57:50.842673 kernel: kvm-guest: stealtime: cpu 1, msr 9cc9c0c0 Dec 13 01:57:50.842680 kernel: #2 Dec 13 01:57:50.842687 kernel: kvm-clock: cpu 2, msr 9b19b081, secondary cpu clock Dec 13 01:57:50.842693 kernel: kvm-guest: setup async PF for cpu 2 Dec 13 01:57:50.842700 kernel: kvm-guest: stealtime: cpu 2, msr 9cd1c0c0 Dec 13 01:57:50.842714 kernel: #3 Dec 13 01:57:50.842721 kernel: kvm-clock: cpu 3, msr 9b19b0c1, secondary cpu clock Dec 13 01:57:50.842727 kernel: kvm-guest: setup async PF for cpu 3 Dec 13 01:57:50.842734 kernel: kvm-guest: stealtime: cpu 3, msr 9cd9c0c0 Dec 13 01:57:50.842742 kernel: smp: Brought up 1 node, 4 CPUs Dec 13 01:57:50.842749 kernel: smpboot: Max logical packages: 1 Dec 13 01:57:50.842756 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Dec 13 01:57:50.842762 kernel: devtmpfs: initialized Dec 13 01:57:50.842769 kernel: x86/mm: Memory block size: 128MB Dec 13 01:57:50.842776 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 01:57:50.842783 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Dec 13 01:57:50.842790 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 01:57:50.842796 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 01:57:50.842804 kernel: audit: initializing netlink subsys (disabled) Dec 13 01:57:50.842811 kernel: audit: type=2000 audit(1734055071.040:1): state=initialized audit_enabled=0 res=1 Dec 13 01:57:50.842818 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 01:57:50.842824 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 01:57:50.842831 kernel: cpuidle: using governor menu Dec 13 01:57:50.842838 kernel: ACPI: bus type PCI registered Dec 13 01:57:50.842844 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 01:57:50.842851 kernel: dca service started, version 1.12.1 Dec 13 01:57:50.842858 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Dec 13 01:57:50.842868 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820 Dec 13 01:57:50.842876 kernel: PCI: Using configuration type 1 for base access Dec 13 01:57:50.842883 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 13 01:57:50.842892 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 01:57:50.842899 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 01:57:50.842905 kernel: ACPI: Added _OSI(Module Device) Dec 13 01:57:50.842912 kernel: ACPI: Added _OSI(Processor Device) Dec 13 01:57:50.842919 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 01:57:50.842925 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 01:57:50.842933 kernel: ACPI: Added _OSI(Linux-Dell-Video) Dec 13 01:57:50.842940 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Dec 13 01:57:50.842947 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Dec 13 01:57:50.842954 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 01:57:50.842960 kernel: ACPI: Interpreter enabled Dec 13 01:57:50.842967 kernel: ACPI: PM: (supports S0 S3 S5) Dec 13 01:57:50.842974 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 01:57:50.842981 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 01:57:50.842987 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Dec 13 01:57:50.842995 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 01:57:50.843100 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 01:57:50.843185 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Dec 13 01:57:50.843254 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Dec 13 01:57:50.843263 kernel: PCI host bridge to bus 0000:00 Dec 13 01:57:50.843338 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 01:57:50.843401 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 13 01:57:50.843465 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 01:57:50.843526 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Dec 13 01:57:50.843586 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Dec 13 01:57:50.843646 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Dec 13 01:57:50.843716 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 01:57:50.843801 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Dec 13 01:57:50.843880 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Dec 13 01:57:50.843950 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Dec 13 01:57:50.844019 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Dec 13 01:57:50.844088 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Dec 13 01:57:50.844157 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 13 01:57:50.844272 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Dec 13 01:57:50.844343 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Dec 13 01:57:50.844441 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Dec 13 01:57:50.844514 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Dec 13 01:57:50.844599 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Dec 13 01:57:50.844671 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Dec 13 01:57:50.844752 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Dec 13 01:57:50.844821 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Dec 13 01:57:50.844895 kernel: pci 
0000:00:04.0: [1af4:1000] type 00 class 0x020000 Dec 13 01:57:50.844968 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Dec 13 01:57:50.845038 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Dec 13 01:57:50.845106 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Dec 13 01:57:50.845195 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Dec 13 01:57:50.845272 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Dec 13 01:57:50.845341 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Dec 13 01:57:50.845414 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Dec 13 01:57:50.845485 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Dec 13 01:57:50.845552 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Dec 13 01:57:50.845626 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Dec 13 01:57:50.845695 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Dec 13 01:57:50.845712 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 13 01:57:50.845719 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 13 01:57:50.845726 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 01:57:50.845735 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 13 01:57:50.845742 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Dec 13 01:57:50.845748 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Dec 13 01:57:50.845755 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Dec 13 01:57:50.845762 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Dec 13 01:57:50.845769 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Dec 13 01:57:50.845775 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Dec 13 01:57:50.845782 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Dec 13 01:57:50.845789 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Dec 13 01:57:50.845797 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Dec 13 01:57:50.845804 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Dec 13 01:57:50.845810 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Dec 13 01:57:50.845817 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Dec 13 01:57:50.845824 kernel: iommu: Default domain type: Translated Dec 13 01:57:50.845831 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 01:57:50.845902 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Dec 13 01:57:50.845971 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 13 01:57:50.846038 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Dec 13 01:57:50.846050 kernel: vgaarb: loaded Dec 13 01:57:50.846057 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 13 01:57:50.846064 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 13 01:57:50.846071 kernel: PTP clock support registered Dec 13 01:57:50.846077 kernel: PCI: Using ACPI for IRQ routing Dec 13 01:57:50.846084 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 01:57:50.846091 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Dec 13 01:57:50.846098 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Dec 13 01:57:50.846106 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Dec 13 01:57:50.846112 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Dec 13 01:57:50.846119 kernel: clocksource: Switched to clocksource kvm-clock Dec 13 01:57:50.846126 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 01:57:50.846133 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 01:57:50.846140 kernel: pnp: PnP ACPI init Dec 13 01:57:50.846235 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Dec 13 01:57:50.846246 kernel: pnp: PnP ACPI: found 6 devices Dec 13 01:57:50.846253 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 01:57:50.846262 kernel: NET: Registered PF_INET protocol family Dec 13 01:57:50.846269 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 01:57:50.846276 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 13 01:57:50.846283 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 01:57:50.846290 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 01:57:50.846297 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Dec 13 01:57:50.846304 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 13 01:57:50.846311 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 01:57:50.846319 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 01:57:50.846325 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 01:57:50.846332 kernel: NET: Registered PF_XDP protocol family Dec 13 01:57:50.846394 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 01:57:50.846456 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 01:57:50.846518 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 01:57:50.846578 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Dec 13 01:57:50.846638 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Dec 13 01:57:50.846698 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Dec 13 01:57:50.846717 kernel: PCI: CLS 0 bytes, default 64 Dec 13 01:57:50.846724 kernel: Initialise system trusted keyrings Dec 13 01:57:50.846731 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 13 01:57:50.846738 kernel: Key type asymmetric registered Dec 13 01:57:50.846745 kernel: Asymmetric key parser 'x509' registered Dec 13 01:57:50.846751 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 13 01:57:50.846758 kernel: io scheduler mq-deadline registered Dec 13 01:57:50.846765 kernel: io scheduler kyber registered Dec 13 01:57:50.846772 kernel: io scheduler bfq registered Dec 13 01:57:50.846780 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 01:57:50.846787 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Dec 13 01:57:50.846794 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 13 
01:57:50.846801 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Dec 13 01:57:50.846808 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 01:57:50.846815 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 01:57:50.846821 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 01:57:50.846828 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 01:57:50.846835 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 01:57:50.846929 kernel: rtc_cmos 00:04: RTC can wake from S4 Dec 13 01:57:50.846946 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 01:57:50.847011 kernel: rtc_cmos 00:04: registered as rtc0 Dec 13 01:57:50.847074 kernel: rtc_cmos 00:04: setting system clock to 2024-12-13T01:57:50 UTC (1734055070) Dec 13 01:57:50.847136 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Dec 13 01:57:50.847145 kernel: NET: Registered PF_INET6 protocol family Dec 13 01:57:50.847152 kernel: Segment Routing with IPv6 Dec 13 01:57:50.847159 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 01:57:50.847225 kernel: NET: Registered PF_PACKET protocol family Dec 13 01:57:50.847232 kernel: Key type dns_resolver registered Dec 13 01:57:50.847239 kernel: IPI shorthand broadcast: enabled Dec 13 01:57:50.847246 kernel: sched_clock: Marking stable (412022069, 101695577)->(561664501, -47946855) Dec 13 01:57:50.847252 kernel: registered taskstats version 1 Dec 13 01:57:50.847259 kernel: Loading compiled-in X.509 certificates Dec 13 01:57:50.847266 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: d9defb0205602bee9bb670636cbe5c74194fdb5e' Dec 13 01:57:50.847273 kernel: Key type .fscrypt registered Dec 13 01:57:50.847279 kernel: Key type fscrypt-provisioning registered Dec 13 01:57:50.847288 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 13 01:57:50.847295 kernel: ima: Allocated hash algorithm: sha1 Dec 13 01:57:50.847301 kernel: ima: No architecture policies found Dec 13 01:57:50.847308 kernel: clk: Disabling unused clocks Dec 13 01:57:50.847315 kernel: Freeing unused kernel image (initmem) memory: 47476K Dec 13 01:57:50.847322 kernel: Write protecting the kernel read-only data: 28672k Dec 13 01:57:50.847328 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Dec 13 01:57:50.847335 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K Dec 13 01:57:50.847343 kernel: Run /init as init process Dec 13 01:57:50.847350 kernel: with arguments: Dec 13 01:57:50.847357 kernel: /init Dec 13 01:57:50.847363 kernel: with environment: Dec 13 01:57:50.847370 kernel: HOME=/ Dec 13 01:57:50.847376 kernel: TERM=linux Dec 13 01:57:50.847383 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 01:57:50.847392 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 01:57:50.847402 systemd[1]: Detected virtualization kvm. Dec 13 01:57:50.847410 systemd[1]: Detected architecture x86-64. Dec 13 01:57:50.847417 systemd[1]: Running in initrd. Dec 13 01:57:50.847424 systemd[1]: No hostname configured, using default hostname. Dec 13 01:57:50.847431 systemd[1]: Hostname set to . 
Dec 13 01:57:50.847439 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:57:50.847446 systemd[1]: Queued start job for default target initrd.target. Dec 13 01:57:50.847454 systemd[1]: Started systemd-ask-password-console.path. Dec 13 01:57:50.847461 systemd[1]: Reached target cryptsetup.target. Dec 13 01:57:50.847469 systemd[1]: Reached target paths.target. Dec 13 01:57:50.847492 systemd[1]: Reached target slices.target. Dec 13 01:57:50.847501 systemd[1]: Reached target swap.target. Dec 13 01:57:50.847509 systemd[1]: Reached target timers.target. Dec 13 01:57:50.847516 systemd[1]: Listening on iscsid.socket. Dec 13 01:57:50.847525 systemd[1]: Listening on iscsiuio.socket. Dec 13 01:57:50.847532 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 01:57:50.847540 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 01:57:50.847547 systemd[1]: Listening on systemd-journald.socket. Dec 13 01:57:50.847555 systemd[1]: Listening on systemd-networkd.socket. Dec 13 01:57:50.847562 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 01:57:50.847570 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 01:57:50.847577 systemd[1]: Reached target sockets.target. Dec 13 01:57:50.847585 systemd[1]: Starting kmod-static-nodes.service... Dec 13 01:57:50.847593 systemd[1]: Finished network-cleanup.service. Dec 13 01:57:50.847601 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 01:57:50.847608 systemd[1]: Starting systemd-journald.service... Dec 13 01:57:50.847616 systemd[1]: Starting systemd-modules-load.service... Dec 13 01:57:50.847623 systemd[1]: Starting systemd-resolved.service... Dec 13 01:57:50.847630 systemd[1]: Starting systemd-vconsole-setup.service... Dec 13 01:57:50.847638 systemd[1]: Finished kmod-static-nodes.service. Dec 13 01:57:50.847645 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 01:57:50.847653 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 01:57:50.847664 systemd-journald[198]: Journal started Dec 13 01:57:50.847702 systemd-journald[198]: Runtime Journal (/run/log/journal/14ac50cfb94a42908e604b111faec081) is 6.0M, max 48.5M, 42.5M free. Dec 13 01:57:50.837221 systemd-modules-load[199]: Inserted module 'overlay' Dec 13 01:57:50.878473 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 01:57:50.878491 kernel: Bridge firewalling registered Dec 13 01:57:50.861634 systemd-resolved[200]: Positive Trust Anchors: Dec 13 01:57:50.861646 systemd-resolved[200]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:57:50.861681 systemd-resolved[200]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 01:57:50.864396 systemd-resolved[200]: Defaulting to hostname 'linux'. Dec 13 01:57:50.871743 systemd-modules-load[199]: Inserted module 'br_netfilter' Dec 13 01:57:50.889189 kernel: SCSI subsystem initialized Dec 13 01:57:50.889207 systemd[1]: Started systemd-journald.service. 
Dec 13 01:57:50.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:50.892285 systemd[1]: Started systemd-resolved.service. Dec 13 01:57:50.894621 kernel: audit: type=1130 audit(1734055070.890:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:50.894639 kernel: audit: type=1130 audit(1734055070.893:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:50.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:50.893758 systemd[1]: Finished systemd-vconsole-setup.service. Dec 13 01:57:50.898994 kernel: audit: type=1130 audit(1734055070.897:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:50.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:50.897892 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 01:57:50.903488 kernel: audit: type=1130 audit(1734055070.901:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:50.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:50.902281 systemd[1]: Reached target nss-lookup.target. Dec 13 01:57:50.908214 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 01:57:50.908629 systemd[1]: Starting dracut-cmdline-ask.service... Dec 13 01:57:50.913610 kernel: device-mapper: uevent: version 1.0.3 Dec 13 01:57:50.913626 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Dec 13 01:57:50.913987 systemd-modules-load[199]: Inserted module 'dm_multipath' Dec 13 01:57:50.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:50.914455 systemd[1]: Finished systemd-modules-load.service. Dec 13 01:57:50.920458 kernel: audit: type=1130 audit(1734055070.915:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:50.915940 systemd[1]: Starting systemd-sysctl.service... Dec 13 01:57:50.924405 systemd[1]: Finished systemd-sysctl.service. Dec 13 01:57:50.928991 kernel: audit: type=1130 audit(1734055070.924:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 13 01:57:50.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:50.932427 systemd[1]: Finished dracut-cmdline-ask.service. Dec 13 01:57:50.938122 kernel: audit: type=1130 audit(1734055070.933:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:50.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:50.933817 systemd[1]: Starting dracut-cmdline.service... Dec 13 01:57:50.941132 dracut-cmdline[223]: dracut-dracut-053 Dec 13 01:57:50.942515 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 01:57:50.985190 kernel: Loading iSCSI transport class v2.0-870. Dec 13 01:57:51.001193 kernel: iscsi: registered transport (tcp) Dec 13 01:57:51.021320 kernel: iscsi: registered transport (qla4xxx) Dec 13 01:57:51.021348 kernel: QLogic iSCSI HBA Driver Dec 13 01:57:51.045484 systemd[1]: Finished dracut-cmdline.service. Dec 13 01:57:51.051071 kernel: audit: type=1130 audit(1734055071.046:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:51.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:51.047047 systemd[1]: Starting dracut-pre-udev.service... Dec 13 01:57:51.090196 kernel: raid6: avx2x4 gen() 30502 MB/s Dec 13 01:57:51.107197 kernel: raid6: avx2x4 xor() 8061 MB/s Dec 13 01:57:51.124185 kernel: raid6: avx2x2 gen() 32630 MB/s Dec 13 01:57:51.141202 kernel: raid6: avx2x2 xor() 18877 MB/s Dec 13 01:57:51.158197 kernel: raid6: avx2x1 gen() 25801 MB/s Dec 13 01:57:51.175197 kernel: raid6: avx2x1 xor() 15106 MB/s Dec 13 01:57:51.192196 kernel: raid6: sse2x4 gen() 14575 MB/s Dec 13 01:57:51.209187 kernel: raid6: sse2x4 xor() 7228 MB/s Dec 13 01:57:51.226196 kernel: raid6: sse2x2 gen() 16108 MB/s Dec 13 01:57:51.243191 kernel: raid6: sse2x2 xor() 9589 MB/s Dec 13 01:57:51.260190 kernel: raid6: sse2x1 gen() 12249 MB/s Dec 13 01:57:51.277582 kernel: raid6: sse2x1 xor() 7688 MB/s Dec 13 01:57:51.277603 kernel: raid6: using algorithm avx2x2 gen() 32630 MB/s Dec 13 01:57:51.277613 kernel: raid6: .... xor() 18877 MB/s, rmw enabled Dec 13 01:57:51.278309 kernel: raid6: using avx2x2 recovery algorithm Dec 13 01:57:51.290187 kernel: xor: automatically using best checksumming function avx Dec 13 01:57:51.396204 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 01:57:51.405784 systemd[1]: Finished dracut-pre-udev.service. 
Dec 13 01:57:51.411337 kernel: audit: type=1130 audit(1734055071.406:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:51.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:51.411000 audit: BPF prog-id=7 op=LOAD Dec 13 01:57:51.411000 audit: BPF prog-id=8 op=LOAD Dec 13 01:57:51.412041 systemd[1]: Starting systemd-udevd.service... Dec 13 01:57:51.427815 systemd-udevd[401]: Using default interface naming scheme 'v252'. Dec 13 01:57:51.431874 systemd[1]: Started systemd-udevd.service. Dec 13 01:57:51.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:51.433310 systemd[1]: Starting dracut-pre-trigger.service... Dec 13 01:57:51.445292 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation Dec 13 01:57:51.470830 systemd[1]: Finished dracut-pre-trigger.service. Dec 13 01:57:51.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:51.473728 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 01:57:51.514916 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 01:57:51.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:51.542205 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 01:57:51.557901 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 01:57:51.557958 kernel: AES CTR mode by8 optimization enabled Dec 13 01:57:51.558183 kernel: libata version 3.00 loaded. Dec 13 01:57:51.562759 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Dec 13 01:57:51.577114 kernel: ahci 0000:00:1f.2: version 3.0 Dec 13 01:57:51.577256 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 13 01:57:51.577278 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Dec 13 01:57:51.577384 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 13 01:57:51.577489 kernel: scsi host0: ahci Dec 13 01:57:51.577608 kernel: scsi host1: ahci Dec 13 01:57:51.577770 kernel: scsi host2: ahci Dec 13 01:57:51.577887 kernel: scsi host3: ahci Dec 13 01:57:51.578009 kernel: scsi host4: ahci Dec 13 01:57:51.578127 kernel: scsi host5: ahci Dec 13 01:57:51.578263 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Dec 13 01:57:51.578278 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Dec 13 01:57:51.578290 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Dec 13 01:57:51.578302 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Dec 13 01:57:51.578314 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Dec 13 01:57:51.578326 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Dec 13 01:57:51.578341 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Dec 13 01:57:51.578353 kernel: GPT:9289727 != 19775487 Dec 13 01:57:51.578365 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 01:57:51.578376 kernel: GPT:9289727 != 19775487 Dec 13 01:57:51.578388 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 01:57:51.578399 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:57:51.883210 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 13 01:57:51.883286 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 13 01:57:51.884203 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 13 01:57:51.885211 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 13 01:57:51.886204 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Dec 13 01:57:51.887767 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Dec 13 01:57:51.887787 kernel: ata3.00: applying bridge limits Dec 13 01:57:51.889188 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 13 01:57:51.890186 kernel: ata3.00: configured for UDMA/100 Dec 13 01:57:51.892192 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Dec 13 01:57:51.902191 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (447) Dec 13 01:57:51.908348 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 01:57:51.908635 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 01:57:51.914871 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 01:57:51.921367 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 01:57:51.925341 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 01:57:51.926912 systemd[1]: Starting disk-uuid.service... Dec 13 01:57:51.933193 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Dec 13 01:57:51.950664 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 01:57:51.950689 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Dec 13 01:57:52.071538 disk-uuid[536]: Primary Header is updated. Dec 13 01:57:52.071538 disk-uuid[536]: Secondary Entries is updated. Dec 13 01:57:52.071538 disk-uuid[536]: Secondary Header is updated. Dec 13 01:57:52.075596 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:57:52.077191 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:57:52.080207 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:57:53.080192 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 01:57:53.080544 disk-uuid[539]: The operation has completed successfully. Dec 13 01:57:53.101498 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 01:57:53.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:53.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:53.101572 systemd[1]: Finished disk-uuid.service. Dec 13 01:57:53.110938 systemd[1]: Starting verity-setup.service... Dec 13 01:57:53.122186 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Dec 13 01:57:53.138050 systemd[1]: Found device dev-mapper-usr.device. Dec 13 01:57:53.140267 systemd[1]: Mounting sysusr-usr.mount... Dec 13 01:57:53.142010 systemd[1]: Finished verity-setup.service. 
Dec 13 01:57:53.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:53.197190 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 01:57:53.197447 systemd[1]: Mounted sysusr-usr.mount. Dec 13 01:57:53.198386 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 01:57:53.199103 systemd[1]: Starting ignition-setup.service... Dec 13 01:57:53.201968 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 01:57:53.207819 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:57:53.207851 kernel: BTRFS info (device vda6): using free space tree Dec 13 01:57:53.207864 kernel: BTRFS info (device vda6): has skinny extents Dec 13 01:57:53.214673 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 01:57:53.222463 systemd[1]: Finished ignition-setup.service. Dec 13 01:57:53.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:53.223506 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 01:57:53.255277 ignition[642]: Ignition 2.14.0 Dec 13 01:57:53.255286 ignition[642]: Stage: fetch-offline Dec 13 01:57:53.255327 ignition[642]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:57:53.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:53.259000 audit: BPF prog-id=9 op=LOAD Dec 13 01:57:53.257956 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 01:57:53.255334 ignition[642]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:57:53.260443 systemd[1]: Starting systemd-networkd.service... Dec 13 01:57:53.255415 ignition[642]: parsed url from cmdline: "" Dec 13 01:57:53.255418 ignition[642]: no config URL provided Dec 13 01:57:53.255422 ignition[642]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:57:53.255428 ignition[642]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:57:53.255441 ignition[642]: op(1): [started] loading QEMU firmware config module Dec 13 01:57:53.255446 ignition[642]: op(1): executing: "modprobe" "qemu_fw_cfg" Dec 13 01:57:53.261921 ignition[642]: op(1): [finished] loading QEMU firmware config module Dec 13 01:57:53.304378 ignition[642]: parsing config with SHA512: adca6ba91cdb087e94dbb91539a09ccfb0e9f89e43674dd3fd5b8d33ca28ea8eca4e7e7a82af4bf69f09346c47b852a68754b1099ec59d5a2c7ab07c210eb1b8 Dec 13 01:57:53.309927 unknown[642]: fetched base config from "system" Dec 13 01:57:53.309936 unknown[642]: fetched user config from "qemu" Dec 13 01:57:53.311911 ignition[642]: fetch-offline: fetch-offline passed Dec 13 01:57:53.311968 ignition[642]: Ignition finished successfully Dec 13 01:57:53.314306 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 01:57:53.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:57:53.323069 systemd-networkd[717]: lo: Link UP Dec 13 01:57:53.323076 systemd-networkd[717]: lo: Gained carrier Dec 13 01:57:53.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:53.323435 systemd-networkd[717]: Enumeration completed Dec 13 01:57:53.323504 systemd[1]: Started systemd-networkd.service. Dec 13 01:57:53.323608 systemd-networkd[717]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:57:53.324906 systemd-networkd[717]: eth0: Link UP Dec 13 01:57:53.324909 systemd-networkd[717]: eth0: Gained carrier Dec 13 01:57:53.325177 systemd[1]: Reached target network.target. Dec 13 01:57:53.325806 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 13 01:57:53.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:53.326416 systemd[1]: Starting ignition-kargs.service... Dec 13 01:57:53.328229 systemd[1]: Starting iscsiuio.service... Dec 13 01:57:53.332275 systemd[1]: Started iscsiuio.service. Dec 13 01:57:53.338631 iscsid[723]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 01:57:53.338631 iscsid[723]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Dec 13 01:57:53.338631 iscsid[723]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 01:57:53.338631 iscsid[723]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 01:57:53.338631 iscsid[723]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 01:57:53.338631 iscsid[723]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 01:57:53.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:53.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:53.334530 systemd[1]: Starting iscsid.service... Dec 13 01:57:53.339032 ignition[719]: Ignition 2.14.0 Dec 13 01:57:53.340832 systemd[1]: Started iscsid.service. Dec 13 01:57:53.339039 ignition[719]: Stage: kargs Dec 13 01:57:53.346271 systemd-networkd[717]: eth0: DHCPv4 address 10.0.0.48/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 01:57:53.339127 ignition[719]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:57:53.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:57:53.361000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:53.346584 systemd[1]: Finished ignition-kargs.service. Dec 13 01:57:53.339137 ignition[719]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:57:53.348789 systemd[1]: Starting dracut-initqueue.service... Dec 13 01:57:53.340412 ignition[719]: kargs: kargs passed Dec 13 01:57:53.350556 systemd[1]: Starting ignition-disks.service... Dec 13 01:57:53.340446 ignition[719]: Ignition finished successfully Dec 13 01:57:53.358159 systemd[1]: Finished ignition-disks.service. Dec 13 01:57:53.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:53.356733 ignition[730]: Ignition 2.14.0 Dec 13 01:57:53.360313 systemd[1]: Finished dracut-initqueue.service. Dec 13 01:57:53.356738 ignition[730]: Stage: disks Dec 13 01:57:53.361859 systemd[1]: Reached target initrd-root-device.target. Dec 13 01:57:53.356815 ignition[730]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:57:53.362743 systemd[1]: Reached target local-fs-pre.target. Dec 13 01:57:53.356822 ignition[730]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:57:53.363542 systemd[1]: Reached target local-fs.target. Dec 13 01:57:53.357610 ignition[730]: disks: disks passed Dec 13 01:57:53.364314 systemd[1]: Reached target remote-fs-pre.target. Dec 13 01:57:53.357648 ignition[730]: Ignition finished successfully Dec 13 01:57:53.365820 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 01:57:53.366660 systemd[1]: Reached target remote-fs.target. Dec 13 01:57:53.367031 systemd[1]: Reached target sysinit.target. Dec 13 01:57:53.389443 systemd-fsck[750]: ROOT: clean, 621/553520 files, 56021/553472 blocks Dec 13 01:57:53.367364 systemd[1]: Reached target basic.target. Dec 13 01:57:53.368186 systemd[1]: Starting dracut-pre-mount.service... Dec 13 01:57:53.374476 systemd[1]: Finished dracut-pre-mount.service. Dec 13 01:57:53.376000 systemd[1]: Starting systemd-fsck-root.service... Dec 13 01:57:53.394630 systemd[1]: Finished systemd-fsck-root.service. Dec 13 01:57:53.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:53.396304 systemd[1]: Mounting sysroot.mount... Dec 13 01:57:53.402186 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 01:57:53.402224 systemd[1]: Mounted sysroot.mount. Dec 13 01:57:53.402941 systemd[1]: Reached target initrd-root-fs.target. Dec 13 01:57:53.405036 systemd[1]: Mounting sysroot-usr.mount... Dec 13 01:57:53.406247 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Dec 13 01:57:53.406274 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 01:57:53.406290 systemd[1]: Reached target ignition-diskful.target. Dec 13 01:57:53.408325 systemd[1]: Mounted sysroot-usr.mount. Dec 13 01:57:53.410270 systemd[1]: Starting initrd-setup-root.service... 
Dec 13 01:57:53.415538 initrd-setup-root[760]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 01:57:53.416954 initrd-setup-root[768]: cut: /sysroot/etc/group: No such file or directory Dec 13 01:57:53.418718 initrd-setup-root[776]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 01:57:53.420935 initrd-setup-root[784]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 01:57:53.440938 systemd[1]: Finished initrd-setup-root.service. Dec 13 01:57:53.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:53.442438 systemd[1]: Starting ignition-mount.service... Dec 13 01:57:53.443817 systemd[1]: Starting sysroot-boot.service... Dec 13 01:57:53.446967 bash[801]: umount: /sysroot/usr/share/oem: not mounted. Dec 13 01:57:53.453602 ignition[802]: INFO : Ignition 2.14.0 Dec 13 01:57:53.453602 ignition[802]: INFO : Stage: mount Dec 13 01:57:53.455803 ignition[802]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:57:53.455803 ignition[802]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:57:53.455803 ignition[802]: INFO : mount: mount passed Dec 13 01:57:53.455803 ignition[802]: INFO : Ignition finished successfully Dec 13 01:57:53.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:53.455896 systemd[1]: Finished ignition-mount.service. Dec 13 01:57:53.462516 systemd[1]: Finished sysroot-boot.service. Dec 13 01:57:53.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:54.148724 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 01:57:54.155194 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (811) Dec 13 01:57:54.155219 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 01:57:54.156573 kernel: BTRFS info (device vda6): using free space tree Dec 13 01:57:54.156587 kernel: BTRFS info (device vda6): has skinny extents Dec 13 01:57:54.160315 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 01:57:54.161735 systemd[1]: Starting ignition-files.service... 
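
For reference, the sysroot-usr-share-oem.mount entry above mounts the BTRFS filesystem labelled "OEM" (reported by the kernel as /dev/vda6). A hypothetical way to reproduce that mount by label from a rescue shell is sketched below; the mount point is an assumption and root privileges are required.

    # Illustrative sketch only: mount the OEM partition by its filesystem label,
    # as the initrd does above, then show the resulting mount entry.
    import pathlib
    import subprocess

    target = pathlib.Path("/mnt/oem")                        # assumed mount point
    target.mkdir(parents=True, exist_ok=True)
    subprocess.run(["mount", "-L", "OEM", str(target)], check=True)
    print(subprocess.run(["findmnt", str(target)],
                         capture_output=True, text=True, check=True).stdout)
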
Dec 13 01:57:54.174211 ignition[831]: INFO : Ignition 2.14.0 Dec 13 01:57:54.174211 ignition[831]: INFO : Stage: files Dec 13 01:57:54.176323 ignition[831]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:57:54.176323 ignition[831]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:57:54.176323 ignition[831]: DEBUG : files: compiled without relabeling support, skipping Dec 13 01:57:54.180276 ignition[831]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 01:57:54.180276 ignition[831]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 01:57:54.180276 ignition[831]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 01:57:54.180276 ignition[831]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 01:57:54.180276 ignition[831]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 01:57:54.179907 unknown[831]: wrote ssh authorized keys file for user: core Dec 13 01:57:54.188673 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 01:57:54.188673 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 01:57:54.188673 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 01:57:54.188673 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 01:57:54.223597 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 13 01:57:54.310116 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 01:57:54.310116 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 13 01:57:54.313726 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 01:57:54.313726 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:57:54.317006 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:57:54.318638 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:57:54.320344 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:57:54.322001 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:57:54.323679 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:57:54.325456 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:57:54.327188 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:57:54.328850 ignition[831]: INFO 
: files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:57:54.331213 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:57:54.333579 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:57:54.335587 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 13 01:57:54.644284 systemd-networkd[717]: eth0: Gained IPv6LL Dec 13 01:57:54.670516 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Dec 13 01:57:55.041407 ignition[831]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:57:55.041407 ignition[831]: INFO : files: op(c): [started] processing unit "containerd.service" Dec 13 01:57:55.045563 ignition[831]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 01:57:55.045563 ignition[831]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 01:57:55.045563 ignition[831]: INFO : files: op(c): [finished] processing unit "containerd.service" Dec 13 01:57:55.045563 ignition[831]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Dec 13 01:57:55.045563 ignition[831]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:57:55.045563 ignition[831]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:57:55.045563 ignition[831]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Dec 13 01:57:55.045563 ignition[831]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Dec 13 01:57:55.045563 ignition[831]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 01:57:55.045563 ignition[831]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 01:57:55.045563 ignition[831]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Dec 13 01:57:55.045563 ignition[831]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Dec 13 01:57:55.045563 ignition[831]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 01:57:55.045563 ignition[831]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service" Dec 13 01:57:55.045563 ignition[831]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 01:57:55.084703 ignition[831]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 
01:57:55.086395 ignition[831]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" Dec 13 01:57:55.086395 ignition[831]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:57:55.086395 ignition[831]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:57:55.086395 ignition[831]: INFO : files: files passed Dec 13 01:57:55.086395 ignition[831]: INFO : Ignition finished successfully Dec 13 01:57:55.093151 systemd[1]: Finished ignition-files.service. Dec 13 01:57:55.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:55.095668 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 01:57:55.096135 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 01:57:55.096709 systemd[1]: Starting ignition-quench.service... Dec 13 01:57:55.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:55.100000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:55.099350 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 01:57:55.099432 systemd[1]: Finished ignition-quench.service. Dec 13 01:57:55.106434 initrd-setup-root-after-ignition[857]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Dec 13 01:57:55.109285 initrd-setup-root-after-ignition[859]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:57:55.111262 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 01:57:55.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:55.111843 systemd[1]: Reached target ignition-complete.target. Dec 13 01:57:55.115089 systemd[1]: Starting initrd-parse-etc.service... Dec 13 01:57:55.126552 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 01:57:55.126635 systemd[1]: Finished initrd-parse-etc.service. Dec 13 01:57:55.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:55.128000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:55.128545 systemd[1]: Reached target initrd-fs.target. Dec 13 01:57:55.129065 systemd[1]: Reached target initrd.target. Dec 13 01:57:55.131160 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 01:57:55.132360 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 01:57:55.140478 systemd[1]: Finished dracut-pre-pivot.service. 
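
The files stage that just finished is driven by an Ignition config delivered to the VM; the config itself never appears in the log. Below is a rough, hypothetical sketch of the kind of document that produces the categories of operations reported above (a written file, a unit drop-in, unit contents, enablement presets), using upstream Ignition v3 field names. The real config and its spec version may differ, and every content value shown is a placeholder.

    # Hypothetical example only: build and print an Ignition-style config with the
    # same kinds of entries the files stage logs above. Nothing here is taken from
    # the actual config used on this host.
    import json

    config = {
        "ignition": {"version": "3.3.0"},
        "storage": {
            "files": [
                {
                    "path": "/etc/flatcar/update.conf",
                    "mode": 420,  # 0644
                    "contents": {"source": "data:,GROUP%3Dstable%0A"},  # placeholder body
                },
            ],
        },
        "systemd": {
            "units": [
                {
                    "name": "containerd.service",
                    "dropins": [
                        {
                            "name": "10-use-cgroupfs.conf",
                            "contents": "[Service]\n# placeholder drop-in body\n",
                        },
                    ],
                },
                {
                    "name": "prepare-helm.service",
                    "enabled": True,   # matches the "preset to enabled" entry above
                    "contents": "[Unit]\nDescription=placeholder\n"
                                "[Service]\nType=oneshot\nExecStart=/usr/bin/true\n"
                                "[Install]\nWantedBy=multi-user.target\n",
                },
                {
                    "name": "coreos-metadata.service",
                    "enabled": False,  # matches the "preset to disabled" entry above
                },
            ],
        },
    }
    print(json.dumps(config, indent=2))
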
Dec 13 01:57:55.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:55.141605 systemd[1]: Starting initrd-cleanup.service... Dec 13 01:57:55.148416 systemd[1]: Stopped target nss-lookup.target. Dec 13 01:57:55.148974 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 01:57:55.150530 systemd[1]: Stopped target timers.target. Dec 13 01:57:55.150896 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 01:57:55.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:55.151011 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 01:57:55.153717 systemd[1]: Stopped target initrd.target. Dec 13 01:57:55.155568 systemd[1]: Stopped target basic.target. Dec 13 01:57:55.156887 systemd[1]: Stopped target ignition-complete.target. Dec 13 01:57:55.158486 systemd[1]: Stopped target ignition-diskful.target. Dec 13 01:57:55.159858 systemd[1]: Stopped target initrd-root-device.target. Dec 13 01:57:55.161501 systemd[1]: Stopped target remote-fs.target. Dec 13 01:57:55.163160 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 01:57:55.164764 systemd[1]: Stopped target sysinit.target. Dec 13 01:57:55.166289 systemd[1]: Stopped target local-fs.target. Dec 13 01:57:55.167795 systemd[1]: Stopped target local-fs-pre.target. Dec 13 01:57:55.169248 systemd[1]: Stopped target swap.target. Dec 13 01:57:55.172000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:55.170709 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 01:57:55.170824 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 01:57:55.175000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:55.172309 systemd[1]: Stopped target cryptsetup.target. Dec 13 01:57:55.176000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:55.173676 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 01:57:55.173799 systemd[1]: Stopped dracut-initqueue.service. Dec 13 01:57:55.175674 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 01:57:55.175788 systemd[1]: Stopped ignition-fetch-offline.service. Dec 13 01:57:55.177097 systemd[1]: Stopped target paths.target. Dec 13 01:57:55.178603 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 01:57:55.184194 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 01:57:55.184692 systemd[1]: Stopped target slices.target. Dec 13 01:57:55.184998 systemd[1]: Stopped target sockets.target. Dec 13 01:57:55.187657 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 01:57:55.187722 systemd[1]: Closed iscsid.socket. Dec 13 01:57:55.190000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 13 01:57:55.189058 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 01:57:55.191000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:55.189144 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 01:57:55.190619 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 01:57:55.190699 systemd[1]: Stopped ignition-files.service. Dec 13 01:57:55.193357 systemd[1]: Stopping ignition-mount.service... Dec 13 01:57:55.202154 kernel: kauditd_printk_skb: 36 callbacks suppressed Dec 13 01:57:55.202182 kernel: audit: type=1131 audit(1734055075.197:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:55.197000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:55.194003 systemd[1]: Stopping iscsiuio.service... Dec 13 01:57:55.196026 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 01:57:55.209293 kernel: audit: type=1131 audit(1734055075.204:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:55.204000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:55.196130 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 01:57:55.214029 kernel: audit: type=1131 audit(1734055075.209:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:55.209000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:55.198012 systemd[1]: Stopping sysroot-boot.service... Dec 13 01:57:55.202896 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 01:57:55.203014 systemd[1]: Stopped systemd-udev-trigger.service. Dec 13 01:57:55.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:55.207932 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 01:57:55.222842 kernel: audit: type=1131 audit(1734055075.216:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:55.208041 systemd[1]: Stopped dracut-pre-trigger.service. Dec 13 01:57:55.230275 kernel: audit: type=1130 audit(1734055075.223:51): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:57:55.230302 kernel: audit: type=1131 audit(1734055075.226:52): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:55.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:55.226000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:55.215452 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 01:57:55.216080 systemd[1]: iscsiuio.service: Deactivated successfully. Dec 13 01:57:55.216178 systemd[1]: Stopped iscsiuio.service. Dec 13 01:57:55.219099 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 01:57:55.219188 systemd[1]: Closed iscsiuio.socket. Dec 13 01:57:55.222703 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 01:57:55.222790 systemd[1]: Finished initrd-cleanup.service. Dec 13 01:57:55.241289 ignition[872]: INFO : Ignition 2.14.0 Dec 13 01:57:55.241289 ignition[872]: INFO : Stage: umount Dec 13 01:57:55.243032 ignition[872]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:57:55.243032 ignition[872]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:57:55.245879 ignition[872]: INFO : umount: umount passed Dec 13 01:57:55.246764 ignition[872]: INFO : Ignition finished successfully Dec 13 01:57:55.248158 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 01:57:55.248254 systemd[1]: Stopped ignition-mount.service. Dec 13 01:57:55.253364 kernel: audit: type=1131 audit(1734055075.248:53): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:55.248000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:55.248888 systemd[1]: Stopped target network.target. Dec 13 01:57:55.253736 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 01:57:55.260070 kernel: audit: type=1131 audit(1734055075.253:54): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:55.260095 kernel: audit: type=1131 audit(1734055075.259:55): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:55.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:55.259000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:55.253773 systemd[1]: Stopped ignition-disks.service. 
Dec 13 01:57:55.268323 kernel: audit: type=1131 audit(1734055075.263:56): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:55.263000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:55.254087 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 01:57:55.254112 systemd[1]: Stopped ignition-kargs.service. Dec 13 01:57:55.260082 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 01:57:55.260118 systemd[1]: Stopped ignition-setup.service. Dec 13 01:57:55.264024 systemd[1]: Stopping systemd-networkd.service... Dec 13 01:57:55.268379 systemd[1]: Stopping systemd-resolved.service... Dec 13 01:57:55.275000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:55.272615 systemd-networkd[717]: eth0: DHCPv6 lease lost Dec 13 01:57:55.273715 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 01:57:55.278000 audit: BPF prog-id=9 op=UNLOAD Dec 13 01:57:55.273801 systemd[1]: Stopped systemd-networkd.service. Dec 13 01:57:55.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:55.281000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:55.276268 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 01:57:55.282000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:55.276310 systemd[1]: Closed systemd-networkd.socket. Dec 13 01:57:55.277761 systemd[1]: Stopping network-cleanup.service... Dec 13 01:57:55.278891 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 01:57:55.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:55.278927 systemd[1]: Stopped parse-ip-for-networkd.service. Dec 13 01:57:55.279853 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:57:55.279884 systemd[1]: Stopped systemd-sysctl.service. Dec 13 01:57:55.290000 audit: BPF prog-id=6 op=UNLOAD Dec 13 01:57:55.281493 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 01:57:55.291000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:55.293000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:55.281524 systemd[1]: Stopped systemd-modules-load.service. Dec 13 01:57:55.282554 systemd[1]: Stopping systemd-udevd.service... 
Dec 13 01:57:55.285229 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 01:57:55.285601 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 01:57:55.297000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:55.285669 systemd[1]: Stopped systemd-resolved.service. Dec 13 01:57:55.291004 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 01:57:55.291089 systemd[1]: Stopped network-cleanup.service. Dec 13 01:57:55.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:55.292200 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 01:57:55.304000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:55.305000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:55.292303 systemd[1]: Stopped systemd-udevd.service. Dec 13 01:57:55.295752 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 01:57:55.307000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:55.295823 systemd[1]: Stopped sysroot-boot.service. Dec 13 01:57:55.309000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:55.297748 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 01:57:55.297778 systemd[1]: Closed systemd-udevd-control.socket. Dec 13 01:57:55.299436 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 01:57:55.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:55.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:55.299459 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 01:57:55.301022 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 01:57:55.301066 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 01:57:55.302752 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 01:57:55.302794 systemd[1]: Stopped dracut-cmdline.service. Dec 13 01:57:55.304309 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:57:55.304340 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 01:57:55.306083 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Dec 13 01:57:55.323000 audit: BPF prog-id=5 op=UNLOAD Dec 13 01:57:55.323000 audit: BPF prog-id=4 op=UNLOAD Dec 13 01:57:55.323000 audit: BPF prog-id=3 op=UNLOAD Dec 13 01:57:55.306113 systemd[1]: Stopped initrd-setup-root.service. Dec 13 01:57:55.308440 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 01:57:55.324000 audit: BPF prog-id=8 op=UNLOAD Dec 13 01:57:55.324000 audit: BPF prog-id=7 op=UNLOAD Dec 13 01:57:55.309442 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:57:55.309484 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 01:57:55.312877 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 01:57:55.312944 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 01:57:55.314046 systemd[1]: Reached target initrd-switch-root.target. Dec 13 01:57:55.316516 systemd[1]: Starting initrd-switch-root.service... Dec 13 01:57:55.321502 systemd[1]: Switching root. Dec 13 01:57:55.341901 iscsid[723]: iscsid shutting down. Dec 13 01:57:55.342624 systemd-journald[198]: Received SIGTERM from PID 1 (systemd). Dec 13 01:57:55.342663 systemd-journald[198]: Journal stopped Dec 13 01:57:57.952908 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 01:57:57.952958 kernel: SELinux: Class anon_inode not defined in policy. Dec 13 01:57:57.952975 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 01:57:57.952990 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 01:57:57.953002 kernel: SELinux: policy capability open_perms=1 Dec 13 01:57:57.953020 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 01:57:57.953032 kernel: SELinux: policy capability always_check_network=0 Dec 13 01:57:57.953044 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 01:57:57.953059 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 01:57:57.953072 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 01:57:57.953084 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 01:57:57.953098 systemd[1]: Successfully loaded SELinux policy in 37.857ms. Dec 13 01:57:57.953121 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.259ms. Dec 13 01:57:57.953136 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 01:57:57.953152 systemd[1]: Detected virtualization kvm. Dec 13 01:57:57.953177 systemd[1]: Detected architecture x86-64. Dec 13 01:57:57.953195 systemd[1]: Detected first boot. Dec 13 01:57:57.953209 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:57:57.953222 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Dec 13 01:57:57.953235 systemd[1]: Populated /etc with preset unit settings. Dec 13 01:57:57.953249 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 01:57:57.953268 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
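
systemd 252 flags locksmithd.service above for the legacy CPUShares= and MemoryLimit= directives. The modern replacements are CPUWeight= (default 100, where CPUShares= defaulted to 1024, so an equivalent weight is roughly shares * 100 / 1024) and MemoryMax=. The sketch below only illustrates that conversion with assumed numbers; the unit's real values do not appear in the log.

    # Hypothetical illustration of the CPUShares=/MemoryLimit= -> CPUWeight=/MemoryMax=
    # conversion mentioned above. The legacy values below are assumptions.
    legacy_cpu_shares = 512            # assumed value
    legacy_memory_limit = "256M"       # assumed value

    cpu_weight = max(1, round(legacy_cpu_shares * 100 / 1024))  # proportional mapping

    print("[Service]")
    print(f"CPUWeight={cpu_weight}")
    print(f"MemoryMax={legacy_memory_limit}")
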
Dec 13 01:57:57.953283 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:57:57.953298 systemd[1]: Queued start job for default target multi-user.target. Dec 13 01:57:57.953311 systemd[1]: Unnecessary job was removed for dev-vda6.device. Dec 13 01:57:57.953324 systemd[1]: Created slice system-addon\x2dconfig.slice. Dec 13 01:57:57.953340 systemd[1]: Created slice system-addon\x2drun.slice. Dec 13 01:57:57.953353 systemd[1]: Created slice system-getty.slice. Dec 13 01:57:57.953365 systemd[1]: Created slice system-modprobe.slice. Dec 13 01:57:57.953381 systemd[1]: Created slice system-serial\x2dgetty.slice. Dec 13 01:57:57.953395 systemd[1]: Created slice system-system\x2dcloudinit.slice. Dec 13 01:57:57.953408 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 01:57:57.953421 systemd[1]: Created slice user.slice. Dec 13 01:57:57.953435 systemd[1]: Started systemd-ask-password-console.path. Dec 13 01:57:57.953448 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 01:57:57.953462 systemd[1]: Set up automount boot.automount. Dec 13 01:57:57.953478 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 01:57:57.953491 systemd[1]: Reached target integritysetup.target. Dec 13 01:57:57.953504 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 01:57:57.953525 systemd[1]: Reached target remote-fs.target. Dec 13 01:57:57.953539 systemd[1]: Reached target slices.target. Dec 13 01:57:57.953553 systemd[1]: Reached target swap.target. Dec 13 01:57:57.953566 systemd[1]: Reached target torcx.target. Dec 13 01:57:57.953582 systemd[1]: Reached target veritysetup.target. Dec 13 01:57:57.953596 systemd[1]: Listening on systemd-coredump.socket. Dec 13 01:57:57.953609 systemd[1]: Listening on systemd-initctl.socket. Dec 13 01:57:57.953637 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 01:57:57.953652 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 01:57:57.953666 systemd[1]: Listening on systemd-journald.socket. Dec 13 01:57:57.953678 systemd[1]: Listening on systemd-networkd.socket. Dec 13 01:57:57.953694 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 01:57:57.953709 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 01:57:57.953722 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 01:57:57.953739 systemd[1]: Mounting dev-hugepages.mount... Dec 13 01:57:57.953753 systemd[1]: Mounting dev-mqueue.mount... Dec 13 01:57:57.953767 systemd[1]: Mounting media.mount... Dec 13 01:57:57.953782 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:57:57.953797 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 01:57:57.953811 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 01:57:57.953826 systemd[1]: Mounting tmp.mount... Dec 13 01:57:57.953840 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 01:57:57.953853 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 01:57:57.953869 systemd[1]: Starting kmod-static-nodes.service... Dec 13 01:57:57.953881 systemd[1]: Starting modprobe@configfs.service... Dec 13 01:57:57.953895 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 01:57:57.953909 systemd[1]: Starting modprobe@drm.service... Dec 13 01:57:57.953922 systemd[1]: Starting modprobe@efi_pstore.service... 
Dec 13 01:57:57.953935 systemd[1]: Starting modprobe@fuse.service... Dec 13 01:57:57.953948 systemd[1]: Starting modprobe@loop.service... Dec 13 01:57:57.953964 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 01:57:57.953978 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Dec 13 01:57:57.953995 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Dec 13 01:57:57.954008 systemd[1]: Starting systemd-journald.service... Dec 13 01:57:57.954021 kernel: fuse: init (API version 7.34) Dec 13 01:57:57.954033 kernel: loop: module loaded Dec 13 01:57:57.954046 systemd[1]: Starting systemd-modules-load.service... Dec 13 01:57:57.954060 systemd[1]: Starting systemd-network-generator.service... Dec 13 01:57:57.954073 systemd[1]: Starting systemd-remount-fs.service... Dec 13 01:57:57.954086 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 01:57:57.954100 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:57:57.954116 systemd[1]: Mounted dev-hugepages.mount. Dec 13 01:57:57.954129 systemd[1]: Mounted dev-mqueue.mount. Dec 13 01:57:57.954142 systemd[1]: Mounted media.mount. Dec 13 01:57:57.954156 systemd[1]: Mounted sys-kernel-debug.mount. Dec 13 01:57:57.954186 systemd-journald[1018]: Journal started Dec 13 01:57:57.954233 systemd-journald[1018]: Runtime Journal (/run/log/journal/14ac50cfb94a42908e604b111faec081) is 6.0M, max 48.5M, 42.5M free. Dec 13 01:57:57.874000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 01:57:57.874000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 13 01:57:57.951000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 01:57:57.951000 audit[1018]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffcb138ad20 a2=4000 a3=7ffcb138adbc items=0 ppid=1 pid=1018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:57:57.951000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 01:57:57.957186 systemd[1]: Started systemd-journald.service. Dec 13 01:57:57.957000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:57.957817 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 01:57:57.958726 systemd[1]: Mounted tmp.mount. Dec 13 01:57:57.959740 systemd[1]: Finished kmod-static-nodes.service. Dec 13 01:57:57.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:57.960825 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Dec 13 01:57:57.961005 systemd[1]: Finished modprobe@configfs.service. Dec 13 01:57:57.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:57.961000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:57.962082 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:57:57.962239 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 01:57:57.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:57.962000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:57.963275 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:57:57.963459 systemd[1]: Finished modprobe@drm.service. Dec 13 01:57:57.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:57.964000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:57.964704 systemd[1]: Finished flatcar-tmpfiles.service. Dec 13 01:57:57.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:57.965795 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:57:57.965949 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 01:57:57.966000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:57.966000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:57.967160 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 01:57:57.967335 systemd[1]: Finished modprobe@fuse.service. Dec 13 01:57:57.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:57.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:57.968393 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:57:57.968558 systemd[1]: Finished modprobe@loop.service. 
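
The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop units above are instances of the modprobe@.service template: each is a oneshot that loads a single kernel module and then exits, which is why every "Finished" message is paired with "Deactivated successfully". A hypothetical use of the same template on the running system is sketched below (requires root; fuse must be built as a module for lsmod to list it).

    # Illustrative sketch only: load a module through the modprobe@.service template
    # and confirm it shows up in lsmod.
    import subprocess

    subprocess.run(["systemctl", "start", "modprobe@fuse.service"], check=True)
    loaded = subprocess.run(["lsmod"], capture_output=True, text=True, check=True).stdout
    print("fuse loaded:", "fuse" in loaded.split())
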
Dec 13 01:57:57.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:57.969000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:57.969750 systemd[1]: Finished systemd-modules-load.service. Dec 13 01:57:57.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:57.970996 systemd[1]: Finished systemd-network-generator.service. Dec 13 01:57:57.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:57.972329 systemd[1]: Finished systemd-remount-fs.service. Dec 13 01:57:57.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:57.973492 systemd[1]: Reached target network-pre.target. Dec 13 01:57:57.975369 systemd[1]: Mounting sys-fs-fuse-connections.mount... Dec 13 01:57:57.977156 systemd[1]: Mounting sys-kernel-config.mount... Dec 13 01:57:57.977929 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 01:57:57.979662 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 01:57:57.981255 systemd[1]: Starting systemd-journal-flush.service... Dec 13 01:57:57.982255 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:57:57.983021 systemd[1]: Starting systemd-random-seed.service... Dec 13 01:57:57.984021 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 01:57:57.984933 systemd[1]: Starting systemd-sysctl.service... Dec 13 01:57:57.985078 systemd-journald[1018]: Time spent on flushing to /var/log/journal/14ac50cfb94a42908e604b111faec081 is 12.242ms for 1033 entries. Dec 13 01:57:57.985078 systemd-journald[1018]: System Journal (/var/log/journal/14ac50cfb94a42908e604b111faec081) is 8.0M, max 195.6M, 187.6M free. Dec 13 01:57:58.008674 systemd-journald[1018]: Received client request to flush runtime journal. Dec 13 01:57:57.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:57.988256 systemd[1]: Starting systemd-sysusers.service... Dec 13 01:57:58.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:57.992018 systemd[1]: Mounted sys-fs-fuse-connections.mount. Dec 13 01:57:57.993079 systemd[1]: Mounted sys-kernel-config.mount. Dec 13 01:57:57.994118 systemd[1]: Finished systemd-random-seed.service. 
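
journald's own bookkeeping appears twice above: a 6.0M runtime journal under /run/log/journal at start-up, and a flush of 1033 entries (12.242ms) into the 8.0M system journal under /var/log/journal. On the booted machine the same accounting can be read back with journalctl; a small hypothetical check:

    # Hypothetical check: journalctl --disk-usage sums the space used by the runtime
    # and persistent journals described in the messages above.
    import subprocess

    usage = subprocess.run(["journalctl", "--disk-usage"],
                           capture_output=True, text=True, check=True)
    print(usage.stdout.strip())
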
Dec 13 01:57:57.995049 systemd[1]: Reached target first-boot-complete.target. Dec 13 01:57:58.008730 systemd[1]: Finished systemd-sysctl.service. Dec 13 01:57:58.010396 systemd[1]: Finished systemd-journal-flush.service. Dec 13 01:57:58.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:58.018131 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 01:57:58.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:58.020527 systemd[1]: Starting systemd-udev-settle.service... Dec 13 01:57:58.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:58.021770 systemd[1]: Finished systemd-sysusers.service. Dec 13 01:57:58.023856 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 01:57:58.027423 udevadm[1064]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 01:57:58.037793 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 01:57:58.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:58.409921 systemd[1]: Finished systemd-hwdb-update.service. Dec 13 01:57:58.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:58.412341 systemd[1]: Starting systemd-udevd.service... Dec 13 01:57:58.430026 systemd-udevd[1069]: Using default interface naming scheme 'v252'. Dec 13 01:57:58.443202 systemd[1]: Started systemd-udevd.service. Dec 13 01:57:58.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:58.447322 systemd[1]: Starting systemd-networkd.service... Dec 13 01:57:58.453220 systemd[1]: Starting systemd-userdbd.service... Dec 13 01:57:58.467617 systemd[1]: Found device dev-ttyS0.device. Dec 13 01:57:58.497009 systemd[1]: Started systemd-userdbd.service. Dec 13 01:57:58.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:58.511008 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
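
systemd-udevd[1069] above selects the default interface naming scheme 'v252'. To see which predictable names that scheme would derive for the interface the initrd knew as eth0, one can query udev's net_id builtin directly; a hypothetical probe (requires udevadm and an existing eth0):

    # Illustrative sketch only: print the ID_NET_NAME_* properties udev's net_id
    # builtin computes for eth0 under the active naming scheme.
    import subprocess

    out = subprocess.run(
        ["udevadm", "test-builtin", "net_id", "/sys/class/net/eth0"],
        capture_output=True, text=True, check=True)
    print(out.stdout)
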
Dec 13 01:57:58.519227 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Dec 13 01:57:58.525202 kernel: ACPI: button: Power Button [PWRF] Dec 13 01:57:58.535000 audit[1089]: AVC avc: denied { confidentiality } for pid=1089 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 01:57:58.535000 audit[1089]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55c00165be40 a1=337fc a2=7fd73036cbc5 a3=5 items=110 ppid=1069 pid=1089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:57:58.535000 audit: CWD cwd="/" Dec 13 01:57:58.535000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=1 name=(null) inode=15496 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=2 name=(null) inode=15496 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=3 name=(null) inode=15497 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=4 name=(null) inode=15496 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=5 name=(null) inode=15498 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=6 name=(null) inode=15496 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=7 name=(null) inode=15499 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=8 name=(null) inode=15499 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=9 name=(null) inode=15500 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=10 name=(null) inode=15499 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=11 name=(null) inode=15501 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=12 name=(null) inode=15499 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=13 name=(null) inode=15502 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=14 name=(null) inode=15499 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=15 name=(null) inode=15503 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=16 name=(null) inode=15499 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=17 name=(null) inode=15504 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=18 name=(null) inode=15496 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=19 name=(null) inode=15505 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=20 name=(null) inode=15505 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=21 name=(null) inode=15506 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=22 name=(null) inode=15505 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=23 name=(null) inode=15507 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=24 name=(null) inode=15505 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=25 name=(null) inode=15508 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=26 name=(null) inode=15505 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=27 name=(null) inode=15509 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=28 name=(null) inode=15505 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 
01:57:58.535000 audit: PATH item=29 name=(null) inode=15510 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=30 name=(null) inode=15496 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=31 name=(null) inode=15511 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=32 name=(null) inode=15511 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=33 name=(null) inode=15512 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=34 name=(null) inode=15511 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=35 name=(null) inode=15513 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=36 name=(null) inode=15511 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=37 name=(null) inode=15514 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=38 name=(null) inode=15511 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=39 name=(null) inode=15515 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=40 name=(null) inode=15511 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=41 name=(null) inode=15516 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=42 name=(null) inode=15496 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=43 name=(null) inode=15517 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=44 name=(null) inode=15517 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=45 name=(null) inode=15518 dev=00:0b 
mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=46 name=(null) inode=15517 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=47 name=(null) inode=15519 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=48 name=(null) inode=15517 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=49 name=(null) inode=15520 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=50 name=(null) inode=15517 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=51 name=(null) inode=15521 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=52 name=(null) inode=15517 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=53 name=(null) inode=15522 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=55 name=(null) inode=15523 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=56 name=(null) inode=15523 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=57 name=(null) inode=15524 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=58 name=(null) inode=15523 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=59 name=(null) inode=15525 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=60 name=(null) inode=15523 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=61 name=(null) inode=15526 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=62 name=(null) inode=15526 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=63 name=(null) inode=15527 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=64 name=(null) inode=15526 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=65 name=(null) inode=15528 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=66 name=(null) inode=15526 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=67 name=(null) inode=15529 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=68 name=(null) inode=15526 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=69 name=(null) inode=15530 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=70 name=(null) inode=15526 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=71 name=(null) inode=15531 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=72 name=(null) inode=15523 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=73 name=(null) inode=15532 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=74 name=(null) inode=15532 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=75 name=(null) inode=15533 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=76 name=(null) inode=15532 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=77 name=(null) inode=15534 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 
01:57:58.535000 audit: PATH item=78 name=(null) inode=15532 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=79 name=(null) inode=15535 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=80 name=(null) inode=15532 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=81 name=(null) inode=15536 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=82 name=(null) inode=15532 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=83 name=(null) inode=15537 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=84 name=(null) inode=15523 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=85 name=(null) inode=15538 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=86 name=(null) inode=15538 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=87 name=(null) inode=15539 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=88 name=(null) inode=15538 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=89 name=(null) inode=15540 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=90 name=(null) inode=15538 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=91 name=(null) inode=15541 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=92 name=(null) inode=15538 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=93 name=(null) inode=15542 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=94 name=(null) inode=15538 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=95 name=(null) inode=15543 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=96 name=(null) inode=15523 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=97 name=(null) inode=15544 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=98 name=(null) inode=15544 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=99 name=(null) inode=15545 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=100 name=(null) inode=15544 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=101 name=(null) inode=15546 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=102 name=(null) inode=15544 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=103 name=(null) inode=15547 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=104 name=(null) inode=15544 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=105 name=(null) inode=15548 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=106 name=(null) inode=15544 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=107 name=(null) inode=15549 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PATH item=109 name=(null) inode=15550 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:57:58.535000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 01:57:58.560905 systemd-networkd[1081]: lo: Link UP Dec 13 01:57:58.561227 
systemd-networkd[1081]: lo: Gained carrier Dec 13 01:57:58.561710 systemd-networkd[1081]: Enumeration completed Dec 13 01:57:58.561889 systemd[1]: Started systemd-networkd.service. Dec 13 01:57:58.561992 systemd-networkd[1081]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:57:58.564763 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 13 01:57:58.565025 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Dec 13 01:57:58.565042 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Dec 13 01:57:58.565145 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 13 01:57:58.565406 systemd-networkd[1081]: eth0: Link UP Dec 13 01:57:58.565419 systemd-networkd[1081]: eth0: Gained carrier Dec 13 01:57:58.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:58.571230 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 01:57:58.582318 systemd-networkd[1081]: eth0: DHCPv4 address 10.0.0.48/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 01:57:58.621199 kernel: kvm: Nested Virtualization enabled Dec 13 01:57:58.621291 kernel: SVM: kvm: Nested Paging enabled Dec 13 01:57:58.621307 kernel: SVM: Virtual VMLOAD VMSAVE supported Dec 13 01:57:58.621319 kernel: SVM: Virtual GIF supported Dec 13 01:57:58.638181 kernel: EDAC MC: Ver: 3.0.0 Dec 13 01:57:58.662518 systemd[1]: Finished systemd-udev-settle.service. Dec 13 01:57:58.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:58.664423 systemd[1]: Starting lvm2-activation-early.service... Dec 13 01:57:58.670736 lvm[1105]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:57:58.689761 systemd[1]: Finished lvm2-activation-early.service. Dec 13 01:57:58.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:58.690755 systemd[1]: Reached target cryptsetup.target. Dec 13 01:57:58.692444 systemd[1]: Starting lvm2-activation.service... Dec 13 01:57:58.695766 lvm[1107]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:57:58.724306 systemd[1]: Finished lvm2-activation.service. Dec 13 01:57:58.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:58.725296 systemd[1]: Reached target local-fs-pre.target. Dec 13 01:57:58.726157 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 01:57:58.726193 systemd[1]: Reached target local-fs.target. Dec 13 01:57:58.727115 systemd[1]: Reached target machines.target. Dec 13 01:57:58.729056 systemd[1]: Starting ldconfig.service... Dec 13 01:57:58.730098 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Dec 13 01:57:58.730141 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 01:57:58.731201 systemd[1]: Starting systemd-boot-update.service... Dec 13 01:57:58.732876 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 01:57:58.734836 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 01:57:58.736678 systemd[1]: Starting systemd-sysext.service... Dec 13 01:57:58.737719 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1110 (bootctl) Dec 13 01:57:58.738465 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 01:57:58.745740 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 01:57:58.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:58.748306 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 01:57:58.750976 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 01:57:58.751150 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 01:57:58.760184 kernel: loop0: detected capacity change from 0 to 211296 Dec 13 01:57:58.780620 systemd-fsck[1120]: fsck.fat 4.2 (2021-01-31) Dec 13 01:57:58.780620 systemd-fsck[1120]: /dev/vda1: 789 files, 119291/258078 clusters Dec 13 01:57:58.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:58.781017 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 01:57:58.783507 systemd[1]: Mounting boot.mount... Dec 13 01:57:58.789639 systemd[1]: Mounted boot.mount. Dec 13 01:57:58.992212 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 01:57:58.996395 systemd[1]: Finished systemd-boot-update.service. Dec 13 01:57:58.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:59.000365 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 01:57:59.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:59.001893 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 01:57:59.009188 kernel: loop1: detected capacity change from 0 to 211296 Dec 13 01:57:59.012736 (sd-sysext)[1132]: Using extensions 'kubernetes'. Dec 13 01:57:59.013077 (sd-sysext)[1132]: Merged extensions into '/usr'. Dec 13 01:57:59.028436 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:57:59.029978 systemd[1]: Mounting usr-share-oem.mount... Dec 13 01:57:59.030906 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 01:57:59.031994 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 01:57:59.033534 systemd[1]: Starting modprobe@efi_pstore.service... 
Dec 13 01:57:59.035366 systemd[1]: Starting modprobe@loop.service... Dec 13 01:57:59.036364 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 01:57:59.036467 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 01:57:59.036576 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:57:59.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:59.041000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:59.039026 systemd[1]: Mounted usr-share-oem.mount. Dec 13 01:57:59.040228 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:57:59.040361 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 01:57:59.041636 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:57:59.041758 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 01:57:59.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:59.042000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:59.043387 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:57:59.043526 systemd[1]: Finished modprobe@loop.service. Dec 13 01:57:59.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:59.044000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:59.044918 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:57:59.045002 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 01:57:59.046051 systemd[1]: Finished systemd-sysext.service. Dec 13 01:57:59.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:59.048645 systemd[1]: Starting ensure-sysext.service... Dec 13 01:57:59.050528 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 01:57:59.054818 ldconfig[1109]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 01:57:59.055996 systemd[1]: Reloading. Dec 13 01:57:59.059907 systemd-tmpfiles[1146]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. 
Dec 13 01:57:59.060889 systemd-tmpfiles[1146]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 01:57:59.062633 systemd-tmpfiles[1146]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 01:57:59.106666 /usr/lib/systemd/system-generators/torcx-generator[1166]: time="2024-12-13T01:57:59Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 01:57:59.107289 /usr/lib/systemd/system-generators/torcx-generator[1166]: time="2024-12-13T01:57:59Z" level=info msg="torcx already run" Dec 13 01:57:59.171249 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 01:57:59.171266 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 01:57:59.187461 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:57:59.240893 systemd[1]: Finished ldconfig.service. Dec 13 01:57:59.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:59.242735 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 01:57:59.243000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:59.245732 systemd[1]: Starting audit-rules.service... Dec 13 01:57:59.247498 systemd[1]: Starting clean-ca-certificates.service... Dec 13 01:57:59.249327 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 01:57:59.251654 systemd[1]: Starting systemd-resolved.service... Dec 13 01:57:59.253596 systemd[1]: Starting systemd-timesyncd.service... Dec 13 01:57:59.256033 systemd[1]: Starting systemd-update-utmp.service... Dec 13 01:57:59.257454 systemd[1]: Finished clean-ca-certificates.service. Dec 13 01:57:59.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:59.260537 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:57:59.261000 audit[1230]: SYSTEM_BOOT pid=1230 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 01:57:59.264113 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:57:59.264405 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 01:57:59.265618 systemd[1]: Starting modprobe@dm_mod.service... 
Dec 13 01:57:59.267614 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 01:57:59.269463 systemd[1]: Starting modprobe@loop.service... Dec 13 01:57:59.270359 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 01:57:59.270524 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 01:57:59.270668 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:57:59.270784 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:57:59.272211 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:57:59.272351 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 01:57:59.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:59.273000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:59.274213 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 01:57:59.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:59.275964 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:57:59.276224 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 01:57:59.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:59.277000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:59.277885 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:57:59.278102 systemd[1]: Finished modprobe@loop.service. Dec 13 01:57:59.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:59.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:59.281538 systemd[1]: Finished systemd-update-utmp.service. Dec 13 01:57:59.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:59.282891 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Dec 13 01:57:59.283071 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 01:57:59.284206 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 01:57:59.286041 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 01:57:59.287994 systemd[1]: Starting modprobe@loop.service... Dec 13 01:57:59.288855 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 01:57:59.288955 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 01:57:59.291501 systemd[1]: Starting systemd-update-done.service... Dec 13 01:57:59.292411 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:57:59.292507 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:57:59.293456 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:57:59.293611 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 01:57:59.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:59.294000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:59.294939 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:57:59.299033 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 01:57:59.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:59.300000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:59.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:59.301000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:59.300664 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:57:59.300879 systemd[1]: Finished modprobe@loop.service. Dec 13 01:57:59.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:57:59.302340 systemd[1]: Finished systemd-update-done.service. Dec 13 01:57:59.303753 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Dec 13 01:57:59.303877 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 01:57:59.307000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 01:57:59.307000 audit[1256]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff45f32fb0 a2=420 a3=0 items=0 ppid=1218 pid=1256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:57:59.307000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 01:57:59.307623 augenrules[1256]: No rules Dec 13 01:57:59.307509 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:57:59.307789 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 01:57:59.309462 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 01:57:59.311622 systemd[1]: Starting modprobe@drm.service... Dec 13 01:57:59.313530 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 01:57:59.316072 systemd[1]: Starting modprobe@loop.service... Dec 13 01:57:59.318075 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 01:57:59.318231 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 01:57:59.319640 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 01:57:59.321304 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:57:59.321448 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:57:59.322997 systemd[1]: Finished audit-rules.service. Dec 13 01:57:59.324561 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:57:59.324749 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 01:57:59.326049 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:57:59.326250 systemd[1]: Finished modprobe@drm.service. Dec 13 01:57:59.327440 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:57:59.327585 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 01:57:59.328845 systemd[1]: Started systemd-timesyncd.service. Dec 13 01:58:00.328092 systemd-timesyncd[1227]: Contacted time server 10.0.0.1:123 (10.0.0.1). Dec 13 01:58:00.328126 systemd-timesyncd[1227]: Initial clock synchronization to Fri 2024-12-13 01:58:00.328016 UTC. Dec 13 01:58:00.329172 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:58:00.329396 systemd[1]: Finished modprobe@loop.service. Dec 13 01:58:00.330806 systemd[1]: Reached target time-set.target. Dec 13 01:58:00.331783 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:58:00.331927 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 01:58:00.333556 systemd[1]: Finished ensure-sysext.service. Dec 13 01:58:00.344310 systemd-resolved[1225]: Positive Trust Anchors: Dec 13 01:58:00.344324 systemd-resolved[1225]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:58:00.344350 systemd-resolved[1225]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 01:58:00.351013 systemd-resolved[1225]: Defaulting to hostname 'linux'. Dec 13 01:58:00.352289 systemd[1]: Started systemd-resolved.service. Dec 13 01:58:00.353214 systemd[1]: Reached target network.target. Dec 13 01:58:00.354015 systemd[1]: Reached target nss-lookup.target. Dec 13 01:58:00.354856 systemd[1]: Reached target sysinit.target. Dec 13 01:58:00.355723 systemd[1]: Started motdgen.path. Dec 13 01:58:00.356453 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 01:58:00.357660 systemd[1]: Started logrotate.timer. Dec 13 01:58:00.358468 systemd[1]: Started mdadm.timer. Dec 13 01:58:00.359185 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 01:58:00.360047 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 01:58:00.360085 systemd[1]: Reached target paths.target. Dec 13 01:58:00.360852 systemd[1]: Reached target timers.target. Dec 13 01:58:00.361924 systemd[1]: Listening on dbus.socket. Dec 13 01:58:00.363720 systemd[1]: Starting docker.socket... Dec 13 01:58:00.365206 systemd[1]: Listening on sshd.socket. Dec 13 01:58:00.366013 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 01:58:00.366265 systemd[1]: Listening on docker.socket. Dec 13 01:58:00.367047 systemd[1]: Reached target sockets.target. Dec 13 01:58:00.367842 systemd[1]: Reached target basic.target. Dec 13 01:58:00.368690 systemd[1]: System is tainted: cgroupsv1 Dec 13 01:58:00.368742 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 01:58:00.368787 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 01:58:00.369535 systemd[1]: Starting containerd.service... Dec 13 01:58:00.371032 systemd[1]: Starting dbus.service... Dec 13 01:58:00.372721 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 01:58:00.374899 systemd[1]: Starting extend-filesystems.service... Dec 13 01:58:00.375793 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 01:58:00.376858 systemd[1]: Starting motdgen.service... Dec 13 01:58:00.379718 jq[1280]: false Dec 13 01:58:00.378737 systemd[1]: Starting prepare-helm.service... Dec 13 01:58:00.380677 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 01:58:00.394013 dbus-daemon[1279]: [system] SELinux support is enabled Dec 13 01:58:00.382813 systemd[1]: Starting sshd-keygen.service... Dec 13 01:58:00.385113 systemd[1]: Starting systemd-logind.service... 
Dec 13 01:58:00.385909 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 01:58:00.385960 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 01:58:00.386980 systemd[1]: Starting update-engine.service... Dec 13 01:58:00.402807 jq[1296]: true Dec 13 01:58:00.390272 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 01:58:00.392774 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 01:58:00.393035 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 01:58:00.394238 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 01:58:00.394513 systemd[1]: Finished ssh-key-proc-cmdline.service. Dec 13 01:58:00.395716 systemd[1]: Started dbus.service. Dec 13 01:58:00.400270 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 01:58:00.400299 systemd[1]: Reached target system-config.target. Dec 13 01:58:00.403694 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 01:58:00.407344 jq[1307]: true Dec 13 01:58:00.407412 tar[1301]: linux-amd64/helm Dec 13 01:58:00.403712 systemd[1]: Reached target user-config.target. Dec 13 01:58:00.409467 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 01:58:00.409700 systemd[1]: Finished motdgen.service. Dec 13 01:58:00.415744 extend-filesystems[1281]: Found loop1 Dec 13 01:58:00.416972 extend-filesystems[1281]: Found sr0 Dec 13 01:58:00.416972 extend-filesystems[1281]: Found vda Dec 13 01:58:00.416972 extend-filesystems[1281]: Found vda1 Dec 13 01:58:00.416972 extend-filesystems[1281]: Found vda2 Dec 13 01:58:00.416972 extend-filesystems[1281]: Found vda3 Dec 13 01:58:00.416972 extend-filesystems[1281]: Found usr Dec 13 01:58:00.416972 extend-filesystems[1281]: Found vda4 Dec 13 01:58:00.416972 extend-filesystems[1281]: Found vda6 Dec 13 01:58:00.416972 extend-filesystems[1281]: Found vda7 Dec 13 01:58:00.416972 extend-filesystems[1281]: Found vda9 Dec 13 01:58:00.416972 extend-filesystems[1281]: Checking size of /dev/vda9 Dec 13 01:58:00.442652 extend-filesystems[1281]: Resized partition /dev/vda9 Dec 13 01:58:00.447138 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Dec 13 01:58:00.447168 extend-filesystems[1335]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 01:58:00.449522 update_engine[1295]: I1213 01:58:00.446810 1295 main.cc:92] Flatcar Update Engine starting Dec 13 01:58:00.454103 systemd[1]: Started update-engine.service. Dec 13 01:58:00.454193 update_engine[1295]: I1213 01:58:00.454114 1295 update_check_scheduler.cc:74] Next update check in 10m17s Dec 13 01:58:00.456263 env[1308]: time="2024-12-13T01:58:00.456226633Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 01:58:00.457405 systemd[1]: Started locksmithd.service. Dec 13 01:58:00.477811 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Dec 13 01:58:00.504743 env[1308]: time="2024-12-13T01:58:00.488103864Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Dec 13 01:58:00.504827 systemd-logind[1291]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 01:58:00.504844 systemd-logind[1291]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 01:58:00.505175 extend-filesystems[1335]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 01:58:00.505175 extend-filesystems[1335]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 01:58:00.505175 extend-filesystems[1335]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Dec 13 01:58:00.523831 extend-filesystems[1281]: Resized filesystem in /dev/vda9 Dec 13 01:58:00.524891 bash[1336]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:58:00.505242 systemd-logind[1291]: New seat seat0. Dec 13 01:58:00.525035 env[1308]: time="2024-12-13T01:58:00.505584126Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:58:00.525035 env[1308]: time="2024-12-13T01:58:00.507935916Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:58:00.525035 env[1308]: time="2024-12-13T01:58:00.507964229Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:58:00.525035 env[1308]: time="2024-12-13T01:58:00.508310409Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:58:00.525035 env[1308]: time="2024-12-13T01:58:00.508331598Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 01:58:00.525035 env[1308]: time="2024-12-13T01:58:00.508345885Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 01:58:00.525035 env[1308]: time="2024-12-13T01:58:00.508373186Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 01:58:00.525035 env[1308]: time="2024-12-13T01:58:00.508466191Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:58:00.525035 env[1308]: time="2024-12-13T01:58:00.508725547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:58:00.525035 env[1308]: time="2024-12-13T01:58:00.508943506Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:58:00.510862 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 01:58:00.525415 env[1308]: time="2024-12-13T01:58:00.508961860Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Dec 13 01:58:00.525415 env[1308]: time="2024-12-13T01:58:00.509024668Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 01:58:00.525415 env[1308]: time="2024-12-13T01:58:00.509038434Z" level=info msg="metadata content store policy set" policy=shared Dec 13 01:58:00.511118 systemd[1]: Finished extend-filesystems.service. Dec 13 01:58:00.525434 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 01:58:00.526904 systemd[1]: Started systemd-logind.service. Dec 13 01:58:00.529346 env[1308]: time="2024-12-13T01:58:00.529290975Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 01:58:00.529346 env[1308]: time="2024-12-13T01:58:00.529340628Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 01:58:00.529425 env[1308]: time="2024-12-13T01:58:00.529356508Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 01:58:00.529425 env[1308]: time="2024-12-13T01:58:00.529391223Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 01:58:00.529425 env[1308]: time="2024-12-13T01:58:00.529407674Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 01:58:00.529425 env[1308]: time="2024-12-13T01:58:00.529424736Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 01:58:00.529537 env[1308]: time="2024-12-13T01:58:00.529439263Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 01:58:00.529537 env[1308]: time="2024-12-13T01:58:00.529454912Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 01:58:00.529537 env[1308]: time="2024-12-13T01:58:00.529469059Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Dec 13 01:58:00.529537 env[1308]: time="2024-12-13T01:58:00.529484778Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 01:58:00.529537 env[1308]: time="2024-12-13T01:58:00.529498344Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 01:58:00.529537 env[1308]: time="2024-12-13T01:58:00.529512129Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 01:58:00.529695 env[1308]: time="2024-12-13T01:58:00.529619120Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 01:58:00.529724 env[1308]: time="2024-12-13T01:58:00.529692107Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 01:58:00.530351 env[1308]: time="2024-12-13T01:58:00.530313863Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 01:58:00.530404 env[1308]: time="2024-12-13T01:58:00.530357645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Dec 13 01:58:00.530404 env[1308]: time="2024-12-13T01:58:00.530375619Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 01:58:00.530455 env[1308]: time="2024-12-13T01:58:00.530429810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 01:58:00.530455 env[1308]: time="2024-12-13T01:58:00.530445129Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 01:58:00.530521 env[1308]: time="2024-12-13T01:58:00.530460698Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 01:58:00.530521 env[1308]: time="2024-12-13T01:58:00.530473362Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 01:58:00.530521 env[1308]: time="2024-12-13T01:58:00.530487509Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 01:58:00.530521 env[1308]: time="2024-12-13T01:58:00.530505202Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 01:58:00.530521 env[1308]: time="2024-12-13T01:58:00.530517535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 01:58:00.530652 env[1308]: time="2024-12-13T01:58:00.530531241Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 01:58:00.530652 env[1308]: time="2024-12-13T01:58:00.530547551Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 01:58:00.530712 env[1308]: time="2024-12-13T01:58:00.530660934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 01:58:00.530712 env[1308]: time="2024-12-13T01:58:00.530677655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 01:58:00.530712 env[1308]: time="2024-12-13T01:58:00.530691712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 01:58:00.530712 env[1308]: time="2024-12-13T01:58:00.530703904Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 01:58:00.530875 env[1308]: time="2024-12-13T01:58:00.530721337Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 01:58:00.530875 env[1308]: time="2024-12-13T01:58:00.530733981Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 01:58:00.530875 env[1308]: time="2024-12-13T01:58:00.530752926Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 01:58:00.530875 env[1308]: time="2024-12-13T01:58:00.530808861Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Dec 13 01:58:00.531108 env[1308]: time="2024-12-13T01:58:00.531024375Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:58:00.531108 env[1308]: time="2024-12-13T01:58:00.531102201Z" level=info msg="Connect containerd service" Dec 13 01:58:00.531746 env[1308]: time="2024-12-13T01:58:00.531137047Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:58:00.531746 env[1308]: time="2024-12-13T01:58:00.531648937Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:58:00.531856 env[1308]: time="2024-12-13T01:58:00.531801473Z" level=info msg="Start subscribing containerd event" Dec 13 01:58:00.531892 env[1308]: time="2024-12-13T01:58:00.531882284Z" level=info msg="Start recovering state" Dec 13 01:58:00.531969 env[1308]: time="2024-12-13T01:58:00.531946365Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:58:00.532008 env[1308]: time="2024-12-13T01:58:00.531962385Z" level=info msg="Start event monitor" Dec 13 01:58:00.532008 env[1308]: time="2024-12-13T01:58:00.531991329Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Dec 13 01:58:00.532098 env[1308]: time="2024-12-13T01:58:00.531994405Z" level=info msg="Start snapshots syncer" Dec 13 01:58:00.532098 env[1308]: time="2024-12-13T01:58:00.532053095Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:58:00.532098 env[1308]: time="2024-12-13T01:58:00.532074715Z" level=info msg="Start streaming server" Dec 13 01:58:00.532101 systemd[1]: Started containerd.service. Dec 13 01:58:00.534554 env[1308]: time="2024-12-13T01:58:00.533722495Z" level=info msg="containerd successfully booted in 0.081055s" Dec 13 01:58:00.556220 locksmithd[1340]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 01:58:00.562560 systemd[1]: Created slice system-sshd.slice. Dec 13 01:58:00.824256 tar[1301]: linux-amd64/LICENSE Dec 13 01:58:00.824256 tar[1301]: linux-amd64/README.md Dec 13 01:58:00.828382 systemd[1]: Finished prepare-helm.service. Dec 13 01:58:01.338888 systemd-networkd[1081]: eth0: Gained IPv6LL Dec 13 01:58:01.340446 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 01:58:01.341866 systemd[1]: Reached target network-online.target. Dec 13 01:58:01.344115 systemd[1]: Starting kubelet.service... Dec 13 01:58:01.872456 systemd[1]: Started kubelet.service. Dec 13 01:58:02.017091 sshd_keygen[1309]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 01:58:02.034225 systemd[1]: Finished sshd-keygen.service. Dec 13 01:58:02.036499 systemd[1]: Starting issuegen.service... Dec 13 01:58:02.038061 systemd[1]: Started sshd@0-10.0.0.48:22-10.0.0.1:45946.service. Dec 13 01:58:02.041374 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 01:58:02.041607 systemd[1]: Finished issuegen.service. Dec 13 01:58:02.043917 systemd[1]: Starting systemd-user-sessions.service... Dec 13 01:58:02.050011 systemd[1]: Finished systemd-user-sessions.service. Dec 13 01:58:02.051978 systemd[1]: Started getty@tty1.service. Dec 13 01:58:02.054164 systemd[1]: Started serial-getty@ttyS0.service. Dec 13 01:58:02.055198 systemd[1]: Reached target getty.target. Dec 13 01:58:02.056174 systemd[1]: Reached target multi-user.target. Dec 13 01:58:02.057997 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 01:58:02.063870 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 01:58:02.064052 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 01:58:02.067870 systemd[1]: Startup finished in 5.286s (kernel) + 5.687s (userspace) = 10.974s. Dec 13 01:58:02.078951 sshd[1381]: Accepted publickey for core from 10.0.0.1 port 45946 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 01:58:02.080443 sshd[1381]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:58:02.088363 systemd-logind[1291]: New session 1 of user core. Dec 13 01:58:02.089143 systemd[1]: Created slice user-500.slice. Dec 13 01:58:02.090055 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 01:58:02.098352 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 01:58:02.099338 systemd[1]: Starting user@500.service... Dec 13 01:58:02.101894 (systemd)[1395]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:58:02.166885 systemd[1395]: Queued start job for default target default.target. Dec 13 01:58:02.167070 systemd[1395]: Reached target paths.target. Dec 13 01:58:02.167085 systemd[1395]: Reached target sockets.target. 
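The "Accepted publickey ... RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M" entries above identify the client key by its OpenSSH SHA256 fingerprint: the unpadded base64 of the SHA-256 hash of the raw public-key blob (the base64 field of a .pub/authorized_keys line). A minimal Python sketch of that derivation follows; the all-zero Ed25519 blob at the bottom is a hypothetical placeholder, not the key from this log.

    import base64
    import hashlib

    def ssh_fingerprint(pub_line: str) -> str:
        """OpenSSH-style SHA256 fingerprint for a .pub/authorized_keys line."""
        blob = base64.b64decode(pub_line.split()[1])   # second field is the key blob
        digest = hashlib.sha256(blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

    # Hypothetical placeholder: an Ed25519 public key whose 32 key bytes are all zero.
    blob = b"\x00\x00\x00\x0bssh-ed25519" + b"\x00\x00\x00\x20" + bytes(32)
    print(ssh_fingerprint("ssh-ed25519 " + base64.b64encode(blob).decode() + " demo"))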
Dec 13 01:58:02.167097 systemd[1395]: Reached target timers.target. Dec 13 01:58:02.167107 systemd[1395]: Reached target basic.target. Dec 13 01:58:02.167144 systemd[1395]: Reached target default.target. Dec 13 01:58:02.167164 systemd[1395]: Startup finished in 59ms. Dec 13 01:58:02.167262 systemd[1]: Started user@500.service. Dec 13 01:58:02.168415 systemd[1]: Started session-1.scope. Dec 13 01:58:02.218368 systemd[1]: Started sshd@1-10.0.0.48:22-10.0.0.1:45962.service. Dec 13 01:58:02.248079 sshd[1405]: Accepted publickey for core from 10.0.0.1 port 45962 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 01:58:02.254880 sshd[1405]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:58:02.258959 systemd-logind[1291]: New session 2 of user core. Dec 13 01:58:02.259227 systemd[1]: Started session-2.scope. Dec 13 01:58:02.315472 systemd[1]: Started sshd@2-10.0.0.48:22-10.0.0.1:45976.service. Dec 13 01:58:02.315755 sshd[1405]: pam_unix(sshd:session): session closed for user core Dec 13 01:58:02.318446 systemd[1]: sshd@1-10.0.0.48:22-10.0.0.1:45962.service: Deactivated successfully. Dec 13 01:58:02.319577 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 01:58:02.320170 systemd-logind[1291]: Session 2 logged out. Waiting for processes to exit. Dec 13 01:58:02.320913 systemd-logind[1291]: Removed session 2. Dec 13 01:58:02.346690 sshd[1410]: Accepted publickey for core from 10.0.0.1 port 45976 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 01:58:02.347937 sshd[1410]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:58:02.351513 systemd-logind[1291]: New session 3 of user core. Dec 13 01:58:02.352280 systemd[1]: Started session-3.scope. Dec 13 01:58:02.363130 kubelet[1365]: E1213 01:58:02.363065 1365 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:58:02.364954 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:58:02.365090 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:58:02.402404 sshd[1410]: pam_unix(sshd:session): session closed for user core Dec 13 01:58:02.404669 systemd[1]: Started sshd@3-10.0.0.48:22-10.0.0.1:45986.service. Dec 13 01:58:02.405071 systemd[1]: sshd@2-10.0.0.48:22-10.0.0.1:45976.service: Deactivated successfully. Dec 13 01:58:02.405894 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 01:58:02.406297 systemd-logind[1291]: Session 3 logged out. Waiting for processes to exit. Dec 13 01:58:02.407157 systemd-logind[1291]: Removed session 3. Dec 13 01:58:02.432847 sshd[1419]: Accepted publickey for core from 10.0.0.1 port 45986 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 01:58:02.433826 sshd[1419]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:58:02.436713 systemd-logind[1291]: New session 4 of user core. Dec 13 01:58:02.437303 systemd[1]: Started session-4.scope. Dec 13 01:58:02.489948 sshd[1419]: pam_unix(sshd:session): session closed for user core Dec 13 01:58:02.492309 systemd[1]: Started sshd@4-10.0.0.48:22-10.0.0.1:45996.service. Dec 13 01:58:02.492733 systemd[1]: sshd@3-10.0.0.48:22-10.0.0.1:45986.service: Deactivated successfully. 
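The kubelet failure above ("E1213 01:58:02.363065 1365 run.go:74] ...") is printed in the klog header format: severity letter, MMDD, wall-clock time, PID, source file and line, then the message. Below is a small sketch that splits such a line into its fields, assuming only the standard klog layout; the sample message is abbreviated from the log.

    import re

    KLOG_RE = re.compile(
        r"^(?P<sev>[IWEF])(?P<mmdd>\d{4})\s+(?P<time>\d{2}:\d{2}:\d{2}\.\d+)\s+"
        r"(?P<pid>\d+)\s+(?P<src>[^:]+):(?P<line>\d+)\]\s*(?P<msg>.*)$"
    )

    sample = 'E1213 01:58:02.363065 1365 run.go:74] "command failed" err="..."'
    m = KLOG_RE.match(sample)
    if m:
        print(m["sev"], m["mmdd"], m["time"], m["src"], m["line"])
        # -> E 1213 01:58:02.363065 run.go 74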
Dec 13 01:58:02.493579 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 01:58:02.493672 systemd-logind[1291]: Session 4 logged out. Waiting for processes to exit. Dec 13 01:58:02.494551 systemd-logind[1291]: Removed session 4. Dec 13 01:58:02.520730 sshd[1426]: Accepted publickey for core from 10.0.0.1 port 45996 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 01:58:02.521807 sshd[1426]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:58:02.524563 systemd-logind[1291]: New session 5 of user core. Dec 13 01:58:02.525189 systemd[1]: Started session-5.scope. Dec 13 01:58:02.579298 sudo[1431]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 01:58:02.579494 sudo[1431]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 01:58:02.585987 dbus-daemon[1279]: \xd0\xfd\x89\xc4\u007fU: received setenforce notice (enforcing=1660118240) Dec 13 01:58:02.588000 sudo[1431]: pam_unix(sudo:session): session closed for user root Dec 13 01:58:02.589564 sshd[1426]: pam_unix(sshd:session): session closed for user core Dec 13 01:58:02.591753 systemd[1]: Started sshd@5-10.0.0.48:22-10.0.0.1:46010.service. Dec 13 01:58:02.592160 systemd[1]: sshd@4-10.0.0.48:22-10.0.0.1:45996.service: Deactivated successfully. Dec 13 01:58:02.593074 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 01:58:02.593176 systemd-logind[1291]: Session 5 logged out. Waiting for processes to exit. Dec 13 01:58:02.594289 systemd-logind[1291]: Removed session 5. Dec 13 01:58:02.620511 sshd[1433]: Accepted publickey for core from 10.0.0.1 port 46010 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 01:58:02.621320 sshd[1433]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:58:02.624375 systemd-logind[1291]: New session 6 of user core. Dec 13 01:58:02.625055 systemd[1]: Started session-6.scope. Dec 13 01:58:02.676382 sudo[1440]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 01:58:02.676551 sudo[1440]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 01:58:02.679251 sudo[1440]: pam_unix(sudo:session): session closed for user root Dec 13 01:58:02.683075 sudo[1439]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 01:58:02.683275 sudo[1439]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 01:58:02.691216 systemd[1]: Stopping audit-rules.service... Dec 13 01:58:02.691000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 13 01:58:02.692940 auditctl[1443]: No rules Dec 13 01:58:02.693332 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 01:58:02.693530 systemd[1]: Stopped audit-rules.service. Dec 13 01:58:02.694968 systemd[1]: Starting audit-rules.service... 
Dec 13 01:58:02.695967 kernel: kauditd_printk_skb: 216 callbacks suppressed Dec 13 01:58:02.696016 kernel: audit: type=1305 audit(1734055082.691:149): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 13 01:58:02.696039 kernel: audit: type=1300 audit(1734055082.691:149): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff8602e710 a2=420 a3=0 items=0 ppid=1 pid=1443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:02.691000 audit[1443]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff8602e710 a2=420 a3=0 items=0 ppid=1 pid=1443 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:02.691000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Dec 13 01:58:02.701921 kernel: audit: type=1327 audit(1734055082.691:149): proctitle=2F7362696E2F617564697463746C002D44 Dec 13 01:58:02.692000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:02.705340 kernel: audit: type=1131 audit(1734055082.692:150): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:02.715097 augenrules[1461]: No rules Dec 13 01:58:02.715704 systemd[1]: Finished audit-rules.service. Dec 13 01:58:02.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:02.716685 sudo[1439]: pam_unix(sudo:session): session closed for user root Dec 13 01:58:02.715000 audit[1439]: USER_END pid=1439 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 01:58:02.717985 sshd[1433]: pam_unix(sshd:session): session closed for user core Dec 13 01:58:02.721331 systemd[1]: Started sshd@6-10.0.0.48:22-10.0.0.1:46014.service. Dec 13 01:58:02.721692 systemd[1]: sshd@5-10.0.0.48:22-10.0.0.1:46010.service: Deactivated successfully. Dec 13 01:58:02.722410 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 01:58:02.722487 systemd-logind[1291]: Session 6 logged out. Waiting for processes to exit. Dec 13 01:58:02.723600 systemd-logind[1291]: Removed session 6. Dec 13 01:58:02.738727 kernel: audit: type=1130 audit(1734055082.715:151): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:02.738880 kernel: audit: type=1106 audit(1734055082.715:152): pid=1439 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 13 01:58:02.738906 kernel: audit: type=1104 audit(1734055082.716:153): pid=1439 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 01:58:02.716000 audit[1439]: CRED_DISP pid=1439 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 01:58:02.718000 audit[1433]: USER_END pid=1433 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:58:02.746383 kernel: audit: type=1106 audit(1734055082.718:154): pid=1433 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:58:02.746427 kernel: audit: type=1104 audit(1734055082.718:155): pid=1433 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:58:02.718000 audit[1433]: CRED_DISP pid=1433 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:58:02.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.48:22-10.0.0.1:46014 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:02.753543 kernel: audit: type=1130 audit(1734055082.720:156): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.48:22-10.0.0.1:46014 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:02.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.48:22-10.0.0.1:46010 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:58:02.765000 audit[1466]: USER_ACCT pid=1466 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:58:02.766441 sshd[1466]: Accepted publickey for core from 10.0.0.1 port 46014 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 01:58:02.766000 audit[1466]: CRED_ACQ pid=1466 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:58:02.766000 audit[1466]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe33cf9640 a2=3 a3=0 items=0 ppid=1 pid=1466 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:02.766000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 01:58:02.767745 sshd[1466]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:58:02.771778 systemd-logind[1291]: New session 7 of user core. Dec 13 01:58:02.772532 systemd[1]: Started session-7.scope. Dec 13 01:58:02.776000 audit[1466]: USER_START pid=1466 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:58:02.777000 audit[1471]: CRED_ACQ pid=1471 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:58:02.823000 audit[1472]: USER_ACCT pid=1472 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 01:58:02.824000 audit[1472]: CRED_REFR pid=1472 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 01:58:02.824725 sudo[1472]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 01:58:02.824911 sudo[1472]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 01:58:02.825000 audit[1472]: USER_START pid=1472 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 01:58:02.852548 systemd[1]: Starting docker.service... 
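The kernel audit messages a few lines up carry timestamps of the form audit(1734055082.691:149): seconds since the Unix epoch, milliseconds, and a per-boot serial number. A quick check, assuming nothing beyond the standard epoch encoding, confirms the value lines up with the journal's own Dec 13 01:58:02 timestamps:

    from datetime import datetime, timezone

    stamp = "1734055082.691:149"                 # copied from an audit record above
    epoch, serial = stamp.rsplit(":", 1)
    when = datetime.fromtimestamp(float(epoch), tz=timezone.utc)
    print(when.strftime("%Y-%m-%d %H:%M:%S UTC"), "serial", serial)
    # -> 2024-12-13 01:58:02 UTC serial 149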
Dec 13 01:58:02.879858 env[1483]: time="2024-12-13T01:58:02.879811139Z" level=info msg="Starting up" Dec 13 01:58:02.881820 env[1483]: time="2024-12-13T01:58:02.881792425Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 01:58:02.881820 env[1483]: time="2024-12-13T01:58:02.881812061Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 01:58:02.881900 env[1483]: time="2024-12-13T01:58:02.881834123Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 01:58:02.881900 env[1483]: time="2024-12-13T01:58:02.881842859Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 01:58:02.883163 env[1483]: time="2024-12-13T01:58:02.883147165Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 01:58:02.883163 env[1483]: time="2024-12-13T01:58:02.883160921Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 01:58:02.883163 env[1483]: time="2024-12-13T01:58:02.883171491Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 01:58:02.883261 env[1483]: time="2024-12-13T01:58:02.883179576Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 01:58:04.046595 env[1483]: time="2024-12-13T01:58:04.046551183Z" level=warning msg="Your kernel does not support cgroup blkio weight" Dec 13 01:58:04.046595 env[1483]: time="2024-12-13T01:58:04.046578033Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Dec 13 01:58:04.047017 env[1483]: time="2024-12-13T01:58:04.046810690Z" level=info msg="Loading containers: start." 
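dockerd above reaches its bundled containerd over unix:///var/run/docker/libcontainerd/docker-containerd.sock, and the CRI plugin earlier listed /run/containerd/containerd.sock as its endpoint. The sketch below is a generic connectivity probe one could run against such a socket path; the path is taken from this log and may differ on other hosts.

    import socket

    def probe_unix_socket(path: str) -> bool:
        """True if something is accepting connections on a unix stream socket."""
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.settimeout(1.0)
        try:
            s.connect(path)
            return True
        except OSError:
            return False
        finally:
            s.close()

    print(probe_unix_socket("/run/containerd/containerd.sock"))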
Dec 13 01:58:04.100000 audit[1518]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1518 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:58:04.100000 audit[1518]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffee6b8bec0 a2=0 a3=7ffee6b8beac items=0 ppid=1483 pid=1518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:04.100000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 13 01:58:04.102000 audit[1520]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1520 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:58:04.102000 audit[1520]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffcf8cbb780 a2=0 a3=7ffcf8cbb76c items=0 ppid=1483 pid=1520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:04.102000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 13 01:58:04.104000 audit[1522]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1522 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:58:04.104000 audit[1522]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffeed99a460 a2=0 a3=7ffeed99a44c items=0 ppid=1483 pid=1522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:04.104000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 13 01:58:04.105000 audit[1524]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1524 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:58:04.105000 audit[1524]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffdabd19c50 a2=0 a3=7ffdabd19c3c items=0 ppid=1483 pid=1524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:04.105000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 13 01:58:04.107000 audit[1526]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1526 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:58:04.107000 audit[1526]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd9419a790 a2=0 a3=7ffd9419a77c items=0 ppid=1483 pid=1526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:04.107000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Dec 13 01:58:04.122000 audit[1531]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1531 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Dec 13 01:58:04.122000 audit[1531]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffdb69cffd0 a2=0 a3=7ffdb69cffbc items=0 ppid=1483 pid=1531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:04.122000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Dec 13 01:58:04.131000 audit[1533]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1533 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:58:04.131000 audit[1533]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd371fa060 a2=0 a3=7ffd371fa04c items=0 ppid=1483 pid=1533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:04.131000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 13 01:58:04.133000 audit[1535]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1535 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:58:04.133000 audit[1535]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffdafcd4970 a2=0 a3=7ffdafcd495c items=0 ppid=1483 pid=1535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:04.133000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 13 01:58:04.134000 audit[1537]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1537 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:58:04.134000 audit[1537]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffc53466f70 a2=0 a3=7ffc53466f5c items=0 ppid=1483 pid=1537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:04.134000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 13 01:58:04.143000 audit[1541]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1541 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:58:04.143000 audit[1541]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffce5979170 a2=0 a3=7ffce597915c items=0 ppid=1483 pid=1541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:04.143000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Dec 13 01:58:04.148000 audit[1542]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1542 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:58:04.148000 audit[1542]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe59ec0c50 a2=0 a3=7ffe59ec0c3c items=0 ppid=1483 
pid=1542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:04.148000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 13 01:58:04.157791 kernel: Initializing XFRM netlink socket Dec 13 01:58:04.183940 env[1483]: time="2024-12-13T01:58:04.183884632Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Dec 13 01:58:04.199000 audit[1550]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1550 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:58:04.199000 audit[1550]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffe983f6660 a2=0 a3=7ffe983f664c items=0 ppid=1483 pid=1550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:04.199000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 13 01:58:04.209000 audit[1553]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1553 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:58:04.209000 audit[1553]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7fff956cefe0 a2=0 a3=7fff956cefcc items=0 ppid=1483 pid=1553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:04.209000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 13 01:58:04.212000 audit[1556]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1556 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:58:04.212000 audit[1556]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffcfcb80390 a2=0 a3=7ffcfcb8037c items=0 ppid=1483 pid=1556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:04.212000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Dec 13 01:58:04.213000 audit[1558]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1558 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:58:04.213000 audit[1558]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffee4b5af50 a2=0 a3=7ffee4b5af3c items=0 ppid=1483 pid=1558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:04.213000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Dec 13 01:58:04.215000 audit[1560]: NETFILTER_CFG 
table=nat:17 family=2 entries=2 op=nft_register_chain pid=1560 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:58:04.215000 audit[1560]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7fff4dab9bd0 a2=0 a3=7fff4dab9bbc items=0 ppid=1483 pid=1560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:04.215000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 13 01:58:04.217000 audit[1562]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1562 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:58:04.217000 audit[1562]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7fff70d59c80 a2=0 a3=7fff70d59c6c items=0 ppid=1483 pid=1562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:04.217000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 13 01:58:04.218000 audit[1564]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1564 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:58:04.218000 audit[1564]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffe09c778e0 a2=0 a3=7ffe09c778cc items=0 ppid=1483 pid=1564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:04.218000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Dec 13 01:58:04.224000 audit[1567]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1567 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:58:04.224000 audit[1567]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffddd9acf40 a2=0 a3=7ffddd9acf2c items=0 ppid=1483 pid=1567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:04.224000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 13 01:58:04.226000 audit[1569]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1569 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:58:04.226000 audit[1569]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7fff84a09d80 a2=0 a3=7fff84a09d6c items=0 ppid=1483 pid=1569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:04.226000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 13 01:58:04.227000 audit[1571]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1571 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:58:04.227000 audit[1571]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7fff66b36470 a2=0 a3=7fff66b3645c items=0 ppid=1483 pid=1571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:04.227000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 13 01:58:04.229000 audit[1573]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1573 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:58:04.229000 audit[1573]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd1879a930 a2=0 a3=7ffd1879a91c items=0 ppid=1483 pid=1573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:04.229000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 13 01:58:04.230501 systemd-networkd[1081]: docker0: Link UP Dec 13 01:58:04.237000 audit[1577]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1577 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:58:04.237000 audit[1577]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffcab8e3010 a2=0 a3=7ffcab8e2ffc items=0 ppid=1483 pid=1577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:04.237000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Dec 13 01:58:04.243000 audit[1578]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1578 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:58:04.243000 audit[1578]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd20adc170 a2=0 a3=7ffd20adc15c items=0 ppid=1483 pid=1578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:04.243000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 13 01:58:04.244996 env[1483]: time="2024-12-13T01:58:04.244953057Z" level=info msg="Loading containers: done." 
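Each NETFILTER_CFG record above ends with a PROCTITLE field: the issuing process's argv, hex-encoded with NUL separators. Decoding one of the strings from the records above is a purely mechanical transformation and shows the iptables command Docker ran while wiring up the DOCKER-USER chain:

    hexstr = ("2F7573722F7362696E2F69707461626C6573002D2D77616974"
              "002D4900464F5257415244002D6A00444F434B45522D55534552")
    argv = bytes.fromhex(hexstr).split(b"\x00")
    print(" ".join(a.decode() for a in argv))
    # -> /usr/sbin/iptables --wait -I FORWARD -j DOCKER-USER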
Dec 13 01:58:04.259675 env[1483]: time="2024-12-13T01:58:04.259630463Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 01:58:04.259821 env[1483]: time="2024-12-13T01:58:04.259797747Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Dec 13 01:58:04.259898 env[1483]: time="2024-12-13T01:58:04.259866626Z" level=info msg="Daemon has completed initialization" Dec 13 01:58:04.276886 systemd[1]: Started docker.service. Dec 13 01:58:04.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:04.282540 env[1483]: time="2024-12-13T01:58:04.282480835Z" level=info msg="API listen on /run/docker.sock" Dec 13 01:58:04.942526 env[1308]: time="2024-12-13T01:58:04.942484037Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 01:58:05.896282 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1222973602.mount: Deactivated successfully. Dec 13 01:58:09.219322 env[1308]: time="2024-12-13T01:58:09.219257334Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:58:09.222177 env[1308]: time="2024-12-13T01:58:09.222128528Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:58:09.224062 env[1308]: time="2024-12-13T01:58:09.224037278Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:58:09.225600 env[1308]: time="2024-12-13T01:58:09.225574581Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:58:09.226213 env[1308]: time="2024-12-13T01:58:09.226181489Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Dec 13 01:58:09.244978 env[1308]: time="2024-12-13T01:58:09.244944005Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 01:58:12.615909 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 01:58:12.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:12.616110 systemd[1]: Stopped kubelet.service. Dec 13 01:58:12.617302 kernel: kauditd_printk_skb: 84 callbacks suppressed Dec 13 01:58:12.617341 kernel: audit: type=1130 audit(1734055092.615:191): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:12.617599 systemd[1]: Starting kubelet.service... 
Dec 13 01:58:12.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:12.623804 kernel: audit: type=1131 audit(1734055092.615:192): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:12.686031 systemd[1]: Started kubelet.service. Dec 13 01:58:12.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:12.718796 kernel: audit: type=1130 audit(1734055092.685:193): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:12.991300 kubelet[1636]: E1213 01:58:12.991149 1636 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:58:12.995040 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:58:12.995173 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:58:12.994000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 01:58:12.999809 kernel: audit: type=1131 audit(1734055092.994:194): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Dec 13 01:58:13.247306 env[1308]: time="2024-12-13T01:58:13.247147355Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:58:13.285675 env[1308]: time="2024-12-13T01:58:13.285617591Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:58:13.321558 env[1308]: time="2024-12-13T01:58:13.321497849Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:58:13.355873 env[1308]: time="2024-12-13T01:58:13.355829132Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:58:13.356509 env[1308]: time="2024-12-13T01:58:13.356467770Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\"" Dec 13 01:58:13.367260 env[1308]: time="2024-12-13T01:58:13.367224513Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 01:58:15.826003 env[1308]: time="2024-12-13T01:58:15.825944370Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:58:15.828231 env[1308]: time="2024-12-13T01:58:15.828204038Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:58:15.830598 env[1308]: time="2024-12-13T01:58:15.830565917Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:58:15.833987 env[1308]: time="2024-12-13T01:58:15.833959341Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:58:15.834986 env[1308]: time="2024-12-13T01:58:15.834957322Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Dec 13 01:58:15.851355 env[1308]: time="2024-12-13T01:58:15.851312855Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 01:58:17.199794 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount281139174.mount: Deactivated successfully. 
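At this point kubelet.service has failed twice with the same missing /var/lib/kubelet/config.yaml error, and systemd keeps rescheduling it ("restart counter is at 1" above), with attempts roughly ten seconds apart. A small illustration of that spacing, computing the gap between the two failure timestamps visible so far; the times are copied from the log, the rest is illustrative:

    from datetime import datetime

    fmt = "%H:%M:%S.%f"
    attempts = ["01:58:02.363065", "01:58:12.991149"]   # kubelet "command failed" times above
    t0, t1 = (datetime.strptime(t, fmt) for t in attempts)
    print((t1 - t0).total_seconds(), "seconds between failures")
    # -> 10.628084 seconds between failures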
Dec 13 01:58:18.331692 env[1308]: time="2024-12-13T01:58:18.331629254Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:58:18.431627 env[1308]: time="2024-12-13T01:58:18.431563997Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:58:18.450457 env[1308]: time="2024-12-13T01:58:18.450413687Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:58:18.465638 env[1308]: time="2024-12-13T01:58:18.465598774Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:58:18.466045 env[1308]: time="2024-12-13T01:58:18.466015335Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Dec 13 01:58:18.486096 env[1308]: time="2024-12-13T01:58:18.486042083Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 01:58:20.190146 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3017693921.mount: Deactivated successfully. Dec 13 01:58:22.120049 env[1308]: time="2024-12-13T01:58:22.119977459Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:58:22.122707 env[1308]: time="2024-12-13T01:58:22.122675078Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:58:22.125241 env[1308]: time="2024-12-13T01:58:22.125201165Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:58:22.127599 env[1308]: time="2024-12-13T01:58:22.127575898Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:58:22.128543 env[1308]: time="2024-12-13T01:58:22.128507205Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 01:58:22.145473 env[1308]: time="2024-12-13T01:58:22.145422197Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 01:58:22.693141 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount738211331.mount: Deactivated successfully. 
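The PullImage / ImageCreate events above name images both by tag (registry.k8s.io/kube-proxy:v1.29.12) and by digest (registry.k8s.io/kube-proxy@sha256:bc76...). Below is a rough sketch, not containerd's actual reference parser, that splits such a string into registry, repository, and tag or digest:

    def split_image_ref(ref: str):
        """Very rough split of an image reference; breaks for registries with ports."""
        digest = None
        if "@" in ref:
            ref, digest = ref.split("@", 1)
        name, _, tag = ref.partition(":")
        registry, _, repository = name.partition("/")
        return registry, repository, tag or None, digest

    print(split_image_ref("registry.k8s.io/kube-proxy:v1.29.12"))
    print(split_image_ref("registry.k8s.io/coredns/coredns:v1.11.1"))
    # -> ('registry.k8s.io', 'kube-proxy', 'v1.29.12', None)
    # -> ('registry.k8s.io', 'coredns/coredns', 'v1.11.1', None)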
Dec 13 01:58:22.699069 env[1308]: time="2024-12-13T01:58:22.699035980Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:58:22.700841 env[1308]: time="2024-12-13T01:58:22.700811400Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:58:22.702269 env[1308]: time="2024-12-13T01:58:22.702235280Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:58:22.703598 env[1308]: time="2024-12-13T01:58:22.703563120Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:58:22.704112 env[1308]: time="2024-12-13T01:58:22.704082885Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 01:58:22.759433 env[1308]: time="2024-12-13T01:58:22.759393442Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 01:58:23.246135 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 01:58:23.246346 systemd[1]: Stopped kubelet.service. Dec 13 01:58:23.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:23.248297 systemd[1]: Starting kubelet.service... Dec 13 01:58:23.245000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:23.254045 kernel: audit: type=1130 audit(1734055103.245:195): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:23.254099 kernel: audit: type=1131 audit(1734055103.245:196): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:23.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:23.317693 systemd[1]: Started kubelet.service. Dec 13 01:58:23.321797 kernel: audit: type=1130 audit(1734055103.317:197): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:58:23.592562 kubelet[1685]: E1213 01:58:23.592420 1685 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:58:23.594890 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:58:23.595052 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:58:23.594000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 01:58:23.598790 kernel: audit: type=1131 audit(1734055103.594:198): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 01:58:23.906666 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1118581860.mount: Deactivated successfully. Dec 13 01:58:27.988001 env[1308]: time="2024-12-13T01:58:27.987918413Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:58:27.993964 env[1308]: time="2024-12-13T01:58:27.993894911Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:58:27.999046 env[1308]: time="2024-12-13T01:58:27.998959489Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:58:28.002678 env[1308]: time="2024-12-13T01:58:28.002613671Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:58:28.003167 env[1308]: time="2024-12-13T01:58:28.003118047Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Dec 13 01:58:31.409366 systemd[1]: Stopped kubelet.service. Dec 13 01:58:31.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:31.408000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:31.414646 systemd[1]: Starting kubelet.service... Dec 13 01:58:31.424410 kernel: audit: type=1130 audit(1734055111.408:199): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:58:31.424595 kernel: audit: type=1131 audit(1734055111.408:200): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:31.448850 systemd[1]: Reloading. Dec 13 01:58:31.531036 /usr/lib/systemd/system-generators/torcx-generator[1795]: time="2024-12-13T01:58:31Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 01:58:31.531075 /usr/lib/systemd/system-generators/torcx-generator[1795]: time="2024-12-13T01:58:31Z" level=info msg="torcx already run" Dec 13 01:58:31.782787 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 01:58:31.782819 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 01:58:31.807168 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:58:31.906227 systemd[1]: Started kubelet.service. Dec 13 01:58:31.911872 kernel: audit: type=1130 audit(1734055111.906:201): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:31.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:31.912179 systemd[1]: Stopping kubelet.service... Dec 13 01:58:31.914464 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:58:31.913000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:31.914781 systemd[1]: Stopped kubelet.service. Dec 13 01:58:31.916740 systemd[1]: Starting kubelet.service... Dec 13 01:58:31.918853 kernel: audit: type=1131 audit(1734055111.913:202): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:32.024397 systemd[1]: Started kubelet.service. Dec 13 01:58:32.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:32.032203 kernel: audit: type=1130 audit(1734055112.024:203): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:32.133842 kubelet[1862]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:58:32.133842 kubelet[1862]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:58:32.135370 kubelet[1862]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:58:32.135370 kubelet[1862]: I1213 01:58:32.134529 1862 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:58:32.529761 kubelet[1862]: I1213 01:58:32.529545 1862 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 01:58:32.529761 kubelet[1862]: I1213 01:58:32.529627 1862 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:58:32.529985 kubelet[1862]: I1213 01:58:32.529968 1862 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 01:58:32.561111 kubelet[1862]: E1213 01:58:32.560642 1862 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.48:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.48:6443: connect: connection refused Dec 13 01:58:32.564115 kubelet[1862]: I1213 01:58:32.562147 1862 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:58:32.604212 kubelet[1862]: I1213 01:58:32.604143 1862 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 01:58:32.604687 kubelet[1862]: I1213 01:58:32.604658 1862 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:58:32.604896 kubelet[1862]: I1213 01:58:32.604866 1862 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:58:32.605027 kubelet[1862]: I1213 01:58:32.604899 1862 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:58:32.605027 kubelet[1862]: I1213 01:58:32.604909 1862 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:58:32.605102 kubelet[1862]: I1213 01:58:32.605032 1862 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:58:32.605141 kubelet[1862]: I1213 01:58:32.605120 1862 kubelet.go:396] "Attempting to sync node with API server" Dec 13 01:58:32.605141 kubelet[1862]: I1213 01:58:32.605135 1862 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:58:32.605221 kubelet[1862]: I1213 01:58:32.605160 1862 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:58:32.605221 kubelet[1862]: I1213 01:58:32.605186 1862 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:58:32.606317 kubelet[1862]: W1213 01:58:32.605921 1862 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Dec 13 01:58:32.606317 kubelet[1862]: E1213 01:58:32.606002 1862 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Dec 13 01:58:32.606317 kubelet[1862]: W1213 01:58:32.606250 1862 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.48:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Dec 13 
01:58:32.606317 kubelet[1862]: E1213 01:58:32.606276 1862 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.48:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Dec 13 01:58:32.607015 kubelet[1862]: I1213 01:58:32.606987 1862 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 01:58:32.610350 kubelet[1862]: I1213 01:58:32.610266 1862 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:58:32.614904 kubelet[1862]: W1213 01:58:32.614842 1862 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 01:58:32.615874 kubelet[1862]: I1213 01:58:32.615847 1862 server.go:1256] "Started kubelet" Dec 13 01:58:32.616536 kubelet[1862]: I1213 01:58:32.616252 1862 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:58:32.616000 audit[1862]: AVC avc: denied { mac_admin } for pid=1862 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:58:32.617338 kubelet[1862]: I1213 01:58:32.616682 1862 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:58:32.617338 kubelet[1862]: I1213 01:58:32.616743 1862 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:58:32.618019 kubelet[1862]: I1213 01:58:32.617726 1862 server.go:461] "Adding debug handlers to kubelet server" Dec 13 01:58:32.620136 kubelet[1862]: I1213 01:58:32.619869 1862 kubelet.go:1417] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Dec 13 01:58:32.620136 kubelet[1862]: I1213 01:58:32.619929 1862 kubelet.go:1421] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Dec 13 01:58:32.620136 kubelet[1862]: I1213 01:58:32.620059 1862 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:58:32.624303 kernel: audit: type=1400 audit(1734055112.616:204): avc: denied { mac_admin } for pid=1862 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:58:32.624439 kernel: audit: type=1401 audit(1734055112.616:204): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 01:58:32.624458 kernel: audit: type=1300 audit(1734055112.616:204): arch=c000003e syscall=188 success=no exit=-22 a0=c0005e0f90 a1=c000b3ea38 a2=c0005e0f60 a3=25 items=0 ppid=1 pid=1862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:32.616000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 01:58:32.616000 audit[1862]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0005e0f90 a1=c000b3ea38 a2=c0005e0f60 a3=25 items=0 ppid=1 pid=1862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:32.630151 kubelet[1862]: E1213 01:58:32.630122 1862 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:58:32.630354 kubelet[1862]: I1213 01:58:32.630338 1862 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:58:32.630552 kubelet[1862]: I1213 01:58:32.630536 1862 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 01:58:32.630709 kubelet[1862]: I1213 01:58:32.630695 1862 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 01:58:32.616000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 01:58:32.632149 kubelet[1862]: W1213 01:58:32.631782 1862 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Dec 13 01:58:32.632149 kubelet[1862]: E1213 01:58:32.631843 1862 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Dec 13 01:58:32.635569 kubelet[1862]: E1213 01:58:32.635093 1862 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.48:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.48:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181099f0006a2f62 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 01:58:32.615800674 +0000 UTC m=+0.588048662,LastTimestamp:2024-12-13 01:58:32.615800674 +0000 UTC m=+0.588048662,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 01:58:32.636266 kubelet[1862]: E1213 01:58:32.636243 1862 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="200ms" Dec 13 01:58:32.636722 kernel: audit: type=1327 audit(1734055112.616:204): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 01:58:32.619000 audit[1862]: AVC avc: denied { mac_admin } for pid=1862 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:58:32.637664 kubelet[1862]: I1213 01:58:32.637067 1862 factory.go:221] Registration of the systemd container factory successfully Dec 13 
01:58:32.637664 kubelet[1862]: I1213 01:58:32.637160 1862 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:58:32.639358 kubelet[1862]: I1213 01:58:32.639318 1862 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:58:32.640221 kubelet[1862]: E1213 01:58:32.640192 1862 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:58:32.619000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 01:58:32.619000 audit[1862]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000141260 a1=c000b3ea50 a2=c0005e1020 a3=25 items=0 ppid=1 pid=1862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:32.640857 kernel: audit: type=1400 audit(1734055112.619:205): avc: denied { mac_admin } for pid=1862 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:58:32.619000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 01:58:32.626000 audit[1874]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1874 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:58:32.626000 audit[1874]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffcf4b93e60 a2=0 a3=7ffcf4b93e4c items=0 ppid=1862 pid=1874 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:32.626000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 13 01:58:32.628000 audit[1875]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1875 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:58:32.628000 audit[1875]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff93fd81f0 a2=0 a3=7fff93fd81dc items=0 ppid=1862 pid=1875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:32.628000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 13 01:58:32.635000 audit[1877]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1877 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:58:32.635000 audit[1877]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff5d818760 a2=0 a3=7fff5d81874c items=0 ppid=1862 pid=1877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:32.635000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 01:58:32.640000 audit[1880]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1880 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:58:32.640000 audit[1880]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fffd9ac07c0 a2=0 a3=7fffd9ac07ac items=0 ppid=1862 pid=1880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:32.640000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 01:58:32.669000 audit[1886]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1886 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:58:32.669000 audit[1886]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fffd2d5eba0 a2=0 a3=7fffd2d5eb8c items=0 ppid=1862 pid=1886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:32.669000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Dec 13 01:58:32.670737 kubelet[1862]: I1213 01:58:32.670465 1862 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:58:32.672000 audit[1888]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1888 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 01:58:32.672000 audit[1888]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd78439890 a2=0 a3=7ffd7843987c items=0 ppid=1862 pid=1888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:32.672000 audit[1889]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=1889 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:58:32.672000 audit[1889]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff67eb9a40 a2=0 a3=7fff67eb9a2c items=0 ppid=1862 pid=1889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:32.672000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 13 01:58:32.672000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 13 01:58:32.674967 kubelet[1862]: I1213 01:58:32.674916 1862 kubelet_network_linux.go:50] "Initialized iptables rules." 
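The audit PROCTITLE fields scattered through the records above carry the invoking command line hex-encoded, with NUL bytes separating the argv entries. A small decoding sketch (Python standard library only; the hex string below is copied verbatim from the KUBE-FIREWALL chain-creation record above):

    # Decode an audit PROCTITLE value: hex-encoded argv, NUL-separated.
    proctitle_hex = (
        "69707461626c6573002d770035002d5700313030303030"
        "002d4e004b5542452d4649524557414c4c002d740066696c746572"
    )
    argv = [a.decode() for a in bytes.fromhex(proctitle_hex).split(b"\x00")]
    print(argv)
    # ['iptables', '-w', '5', '-W', '100000', '-N', 'KUBE-FIREWALL', '-t', 'filter']

Applied to the longer kubelet proctitles earlier in the log, the same decoding yields /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --confi, truncated in the audit record itself.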
protocol="IPv6" Dec 13 01:58:32.675251 kubelet[1862]: I1213 01:58:32.675234 1862 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:58:32.675369 kubelet[1862]: I1213 01:58:32.675352 1862 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 01:58:32.675604 kubelet[1862]: E1213 01:58:32.675587 1862 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:58:32.675000 audit[1891]: NETFILTER_CFG table=nat:33 family=2 entries=1 op=nft_register_chain pid=1891 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:58:32.675000 audit[1891]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcf85d4f90 a2=0 a3=7ffcf85d4f7c items=0 ppid=1862 pid=1891 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:32.675000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 13 01:58:32.676890 kubelet[1862]: W1213 01:58:32.676194 1862 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Dec 13 01:58:32.676967 kubelet[1862]: E1213 01:58:32.676906 1862 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Dec 13 01:58:32.677000 audit[1893]: NETFILTER_CFG table=mangle:34 family=10 entries=1 op=nft_register_chain pid=1893 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 01:58:32.677000 audit[1893]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc4d1a6b40 a2=0 a3=7ffc4d1a6b2c items=0 ppid=1862 pid=1893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:32.677000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 13 01:58:32.679466 kubelet[1862]: I1213 01:58:32.678644 1862 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:58:32.679466 kubelet[1862]: I1213 01:58:32.678664 1862 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:58:32.679466 kubelet[1862]: I1213 01:58:32.678681 1862 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:58:32.677000 audit[1895]: NETFILTER_CFG table=nat:35 family=10 entries=2 op=nft_register_chain pid=1895 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 01:58:32.677000 audit[1895]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffdf2707110 a2=0 a3=7ffdf27070fc items=0 ppid=1862 pid=1895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:32.677000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 13 01:58:32.677000 audit[1896]: NETFILTER_CFG table=filter:36 family=10 entries=2 op=nft_register_chain pid=1896 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 01:58:32.677000 audit[1896]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc27ba6d80 a2=0 a3=7ffc27ba6d6c items=0 ppid=1862 pid=1896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:32.677000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 13 01:58:32.682000 audit[1894]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_chain pid=1894 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:58:32.682000 audit[1894]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd6e7f5cc0 a2=0 a3=2 items=0 ppid=1862 pid=1894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:32.682000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 13 01:58:32.733226 kubelet[1862]: I1213 01:58:32.733169 1862 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:58:32.733690 kubelet[1862]: E1213 01:58:32.733663 1862 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost" Dec 13 01:58:32.778046 kubelet[1862]: E1213 01:58:32.777941 1862 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 01:58:32.838125 kubelet[1862]: E1213 01:58:32.838001 1862 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="400ms" Dec 13 01:58:32.935446 kubelet[1862]: I1213 01:58:32.935415 1862 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:58:32.935848 kubelet[1862]: E1213 01:58:32.935821 1862 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost" Dec 13 01:58:32.979007 kubelet[1862]: E1213 01:58:32.978932 1862 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 01:58:33.239116 kubelet[1862]: E1213 01:58:33.238989 1862 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="800ms" Dec 13 01:58:33.337674 kubelet[1862]: I1213 01:58:33.337646 1862 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:58:33.337989 kubelet[1862]: E1213 01:58:33.337972 1862 kubelet_node_status.go:96] "Unable to register node with API server" err="Post 
\"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost" Dec 13 01:58:33.348523 kubelet[1862]: I1213 01:58:33.348468 1862 policy_none.go:49] "None policy: Start" Dec 13 01:58:33.349330 kubelet[1862]: I1213 01:58:33.349313 1862 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:58:33.349379 kubelet[1862]: I1213 01:58:33.349337 1862 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:58:33.356188 kubelet[1862]: I1213 01:58:33.356153 1862 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:58:33.355000 audit[1862]: AVC avc: denied { mac_admin } for pid=1862 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:58:33.355000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 01:58:33.355000 audit[1862]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000dff9e0 a1=c00095b218 a2=c000dff9b0 a3=25 items=0 ppid=1 pid=1862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:33.355000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 01:58:33.356489 kubelet[1862]: I1213 01:58:33.356225 1862 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Dec 13 01:58:33.356489 kubelet[1862]: I1213 01:58:33.356373 1862 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:58:33.357702 kubelet[1862]: E1213 01:58:33.357682 1862 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 13 01:58:33.380036 kubelet[1862]: I1213 01:58:33.379964 1862 topology_manager.go:215] "Topology Admit Handler" podUID="b14b28a13f5ab170b638535a330b0d68" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 01:58:33.382537 kubelet[1862]: I1213 01:58:33.382509 1862 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 01:58:33.383360 kubelet[1862]: I1213 01:58:33.383311 1862 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 01:58:33.433906 kubelet[1862]: I1213 01:58:33.433859 1862 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:58:33.433906 kubelet[1862]: I1213 01:58:33.433913 1862 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:58:33.434116 kubelet[1862]: I1213 01:58:33.433933 1862 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b14b28a13f5ab170b638535a330b0d68-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b14b28a13f5ab170b638535a330b0d68\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:58:33.434116 kubelet[1862]: I1213 01:58:33.433949 1862 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b14b28a13f5ab170b638535a330b0d68-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b14b28a13f5ab170b638535a330b0d68\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:58:33.434116 kubelet[1862]: I1213 01:58:33.433972 1862 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b14b28a13f5ab170b638535a330b0d68-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b14b28a13f5ab170b638535a330b0d68\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:58:33.434116 kubelet[1862]: I1213 01:58:33.433989 1862 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:58:33.434116 kubelet[1862]: I1213 01:58:33.434023 1862 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:58:33.434305 kubelet[1862]: I1213 01:58:33.434062 1862 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:58:33.434305 kubelet[1862]: I1213 01:58:33.434098 1862 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Dec 13 01:58:33.689353 kubelet[1862]: E1213 01:58:33.689243 1862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:58:33.689950 env[1308]: time="2024-12-13T01:58:33.689883982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b14b28a13f5ab170b638535a330b0d68,Namespace:kube-system,Attempt:0,}" Dec 13 01:58:33.690932 kubelet[1862]: E1213 01:58:33.690870 1862 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:58:33.690932 kubelet[1862]: E1213 01:58:33.690885 1862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:58:33.691246 env[1308]: time="2024-12-13T01:58:33.691202527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,}" Dec 13 01:58:33.691353 env[1308]: time="2024-12-13T01:58:33.691322528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,}" Dec 13 01:58:33.707058 kubelet[1862]: W1213 01:58:33.707000 1862 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Dec 13 01:58:33.707058 kubelet[1862]: E1213 01:58:33.707050 1862 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.48:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Dec 13 01:58:33.796893 kubelet[1862]: W1213 01:58:33.796818 1862 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.48:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Dec 13 01:58:33.796893 kubelet[1862]: E1213 01:58:33.796886 1862 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.48:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Dec 13 01:58:33.912898 kubelet[1862]: W1213 01:58:33.912840 1862 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Dec 13 01:58:33.912898 kubelet[1862]: E1213 01:58:33.912902 1862 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.48:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Dec 13 01:58:34.040129 kubelet[1862]: E1213 01:58:34.039996 1862 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.48:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.48:6443: connect: connection refused" interval="1.6s" Dec 13 01:58:34.081936 kubelet[1862]: W1213 01:58:34.081860 1862 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Dec 13 01:58:34.081936 kubelet[1862]: E1213 01:58:34.081920 1862 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to 
list *v1.Node: Get "https://10.0.0.48:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.48:6443: connect: connection refused Dec 13 01:58:34.139271 kubelet[1862]: I1213 01:58:34.139216 1862 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:58:34.139505 kubelet[1862]: E1213 01:58:34.139479 1862 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.48:6443/api/v1/nodes\": dial tcp 10.0.0.48:6443: connect: connection refused" node="localhost" Dec 13 01:58:34.423347 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2344870783.mount: Deactivated successfully. Dec 13 01:58:34.432038 env[1308]: time="2024-12-13T01:58:34.431946873Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:58:34.434844 env[1308]: time="2024-12-13T01:58:34.434806321Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:58:34.436611 env[1308]: time="2024-12-13T01:58:34.436576509Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:58:34.437953 env[1308]: time="2024-12-13T01:58:34.437902424Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:58:34.439308 env[1308]: time="2024-12-13T01:58:34.439275609Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:58:34.440299 env[1308]: time="2024-12-13T01:58:34.440273895Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:58:34.441443 env[1308]: time="2024-12-13T01:58:34.441418693Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:58:34.442842 env[1308]: time="2024-12-13T01:58:34.442806967Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:58:34.444599 env[1308]: time="2024-12-13T01:58:34.444564320Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:58:34.446238 env[1308]: time="2024-12-13T01:58:34.446207875Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:58:34.447104 env[1308]: time="2024-12-13T01:58:34.447071292Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 
01:58:34.448747 env[1308]: time="2024-12-13T01:58:34.448719997Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:58:34.476626 env[1308]: time="2024-12-13T01:58:34.474933571Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:58:34.476626 env[1308]: time="2024-12-13T01:58:34.474977124Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:58:34.476626 env[1308]: time="2024-12-13T01:58:34.474990279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:58:34.476626 env[1308]: time="2024-12-13T01:58:34.475154395Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5d23c7e19930e1f3b19d71a85e50bb7f39bd447cbb010c8976e82b8fc9a2b221 pid=1904 runtime=io.containerd.runc.v2 Dec 13 01:58:34.485008 env[1308]: time="2024-12-13T01:58:34.484595545Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:58:34.485008 env[1308]: time="2024-12-13T01:58:34.484667894Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:58:34.485008 env[1308]: time="2024-12-13T01:58:34.484684166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:58:34.485008 env[1308]: time="2024-12-13T01:58:34.484881795Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/081be2f1939e617634deec057258349e3f7e9881bf82b147d923d7514a48586d pid=1917 runtime=io.containerd.runc.v2 Dec 13 01:58:34.493837 env[1308]: time="2024-12-13T01:58:34.493740629Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:58:34.493953 env[1308]: time="2024-12-13T01:58:34.493840430Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:58:34.493953 env[1308]: time="2024-12-13T01:58:34.493861932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:58:34.493999 env[1308]: time="2024-12-13T01:58:34.493980088Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7c804ee1d7e753e165de112da2df5bd3da9d9dcc22b13b2a04bd4d323360b717 pid=1933 runtime=io.containerd.runc.v2 Dec 13 01:58:34.671250 kubelet[1862]: E1213 01:58:34.671211 1862 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.48:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.48:6443: connect: connection refused Dec 13 01:58:34.763609 env[1308]: time="2024-12-13T01:58:34.763452992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d23c7e19930e1f3b19d71a85e50bb7f39bd447cbb010c8976e82b8fc9a2b221\"" Dec 13 01:58:34.764512 kubelet[1862]: E1213 01:58:34.764453 1862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:58:34.769381 env[1308]: time="2024-12-13T01:58:34.769321476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b14b28a13f5ab170b638535a330b0d68,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c804ee1d7e753e165de112da2df5bd3da9d9dcc22b13b2a04bd4d323360b717\"" Dec 13 01:58:34.770128 env[1308]: time="2024-12-13T01:58:34.770090171Z" level=info msg="CreateContainer within sandbox \"5d23c7e19930e1f3b19d71a85e50bb7f39bd447cbb010c8976e82b8fc9a2b221\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 01:58:34.771334 kubelet[1862]: E1213 01:58:34.771291 1862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:58:34.772097 env[1308]: time="2024-12-13T01:58:34.772064180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,} returns sandbox id \"081be2f1939e617634deec057258349e3f7e9881bf82b147d923d7514a48586d\"" Dec 13 01:58:34.774560 kubelet[1862]: E1213 01:58:34.774513 1862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:58:34.782014 env[1308]: time="2024-12-13T01:58:34.781966086Z" level=info msg="CreateContainer within sandbox \"7c804ee1d7e753e165de112da2df5bd3da9d9dcc22b13b2a04bd4d323360b717\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 01:58:34.784156 env[1308]: time="2024-12-13T01:58:34.783950314Z" level=info msg="CreateContainer within sandbox \"081be2f1939e617634deec057258349e3f7e9881bf82b147d923d7514a48586d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 01:58:34.794559 env[1308]: time="2024-12-13T01:58:34.794495364Z" level=info msg="CreateContainer within sandbox \"5d23c7e19930e1f3b19d71a85e50bb7f39bd447cbb010c8976e82b8fc9a2b221\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f213e0f3e5a5d16bee7eeba66d882b77e857af13ef2c85d50749051428595e2a\"" Dec 13 01:58:34.795791 env[1308]: 
time="2024-12-13T01:58:34.795743260Z" level=info msg="StartContainer for \"f213e0f3e5a5d16bee7eeba66d882b77e857af13ef2c85d50749051428595e2a\"" Dec 13 01:58:34.810403 env[1308]: time="2024-12-13T01:58:34.810353502Z" level=info msg="CreateContainer within sandbox \"081be2f1939e617634deec057258349e3f7e9881bf82b147d923d7514a48586d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"95b18e4deeaa1b3c5c60458c9913fe76463e9bd3e41954bdbb5830a5b70e4ace\"" Dec 13 01:58:34.811020 env[1308]: time="2024-12-13T01:58:34.810986277Z" level=info msg="StartContainer for \"95b18e4deeaa1b3c5c60458c9913fe76463e9bd3e41954bdbb5830a5b70e4ace\"" Dec 13 01:58:34.857934 env[1308]: time="2024-12-13T01:58:34.857889001Z" level=info msg="CreateContainer within sandbox \"7c804ee1d7e753e165de112da2df5bd3da9d9dcc22b13b2a04bd4d323360b717\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"763b4a63417c11867733b20b319b1d3371a2d11ec7d3be5678dc94c9ac494dfb\"" Dec 13 01:58:34.858759 env[1308]: time="2024-12-13T01:58:34.858730296Z" level=info msg="StartContainer for \"763b4a63417c11867733b20b319b1d3371a2d11ec7d3be5678dc94c9ac494dfb\"" Dec 13 01:58:34.869141 env[1308]: time="2024-12-13T01:58:34.869080432Z" level=info msg="StartContainer for \"f213e0f3e5a5d16bee7eeba66d882b77e857af13ef2c85d50749051428595e2a\" returns successfully" Dec 13 01:58:34.875600 env[1308]: time="2024-12-13T01:58:34.875550170Z" level=info msg="StartContainer for \"95b18e4deeaa1b3c5c60458c9913fe76463e9bd3e41954bdbb5830a5b70e4ace\" returns successfully" Dec 13 01:58:35.024384 env[1308]: time="2024-12-13T01:58:35.024070674Z" level=info msg="StartContainer for \"763b4a63417c11867733b20b319b1d3371a2d11ec7d3be5678dc94c9ac494dfb\" returns successfully" Dec 13 01:58:35.686607 kubelet[1862]: E1213 01:58:35.686490 1862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:58:35.690272 kubelet[1862]: E1213 01:58:35.690237 1862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:58:35.693979 kubelet[1862]: E1213 01:58:35.693933 1862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:58:35.741041 kubelet[1862]: I1213 01:58:35.741012 1862 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:58:36.314731 kubelet[1862]: E1213 01:58:36.314685 1862 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 13 01:58:36.383253 kubelet[1862]: I1213 01:58:36.383192 1862 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 01:58:36.639743 kubelet[1862]: I1213 01:58:36.639646 1862 apiserver.go:52] "Watching apiserver" Dec 13 01:58:36.698035 kubelet[1862]: E1213 01:58:36.697993 1862 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Dec 13 01:58:36.698518 kubelet[1862]: E1213 01:58:36.698505 1862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Dec 13 01:58:36.731512 kubelet[1862]: I1213 01:58:36.731475 1862 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 01:58:38.780684 kubelet[1862]: E1213 01:58:38.780639 1862 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:58:39.229714 systemd[1]: Reloading. Dec 13 01:58:39.292205 /usr/lib/systemd/system-generators/torcx-generator[2159]: time="2024-12-13T01:58:39Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 01:58:39.292271 /usr/lib/systemd/system-generators/torcx-generator[2159]: time="2024-12-13T01:58:39Z" level=info msg="torcx already run" Dec 13 01:58:39.352720 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 01:58:39.352737 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 01:58:39.369136 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:58:39.444359 systemd[1]: Stopping kubelet.service... Dec 13 01:58:39.444699 kubelet[1862]: I1213 01:58:39.444663 1862 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:58:39.466494 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:58:39.466913 systemd[1]: Stopped kubelet.service. Dec 13 01:58:39.466000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:39.467816 kernel: kauditd_printk_skb: 43 callbacks suppressed Dec 13 01:58:39.467884 kernel: audit: type=1131 audit(1734055119.466:219): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:39.469210 systemd[1]: Starting kubelet.service... Dec 13 01:58:39.626422 systemd[1]: Started kubelet.service. Dec 13 01:58:39.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:39.631852 kernel: audit: type=1130 audit(1734055119.625:220): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:39.682634 kubelet[2213]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:58:39.682634 kubelet[2213]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Dec 13 01:58:39.682634 kubelet[2213]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:58:39.684476 kubelet[2213]: I1213 01:58:39.682652 2213 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:58:39.690833 kubelet[2213]: I1213 01:58:39.690747 2213 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 01:58:39.690833 kubelet[2213]: I1213 01:58:39.690830 2213 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:58:39.691237 kubelet[2213]: I1213 01:58:39.691212 2213 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 01:58:39.693118 kubelet[2213]: I1213 01:58:39.692950 2213 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 01:58:39.694809 kubelet[2213]: I1213 01:58:39.694786 2213 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:58:39.702114 kubelet[2213]: I1213 01:58:39.702073 2213 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 01:58:39.702650 kubelet[2213]: I1213 01:58:39.702624 2213 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:58:39.703206 kubelet[2213]: I1213 01:58:39.702908 2213 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:58:39.703206 kubelet[2213]: I1213 01:58:39.702944 2213 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:58:39.703206 kubelet[2213]: I1213 01:58:39.702958 2213 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:58:39.703206 kubelet[2213]: I1213 01:58:39.702990 2213 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:58:39.703206 kubelet[2213]: 
I1213 01:58:39.703089 2213 kubelet.go:396] "Attempting to sync node with API server" Dec 13 01:58:39.703206 kubelet[2213]: I1213 01:58:39.703114 2213 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:58:39.703206 kubelet[2213]: I1213 01:58:39.703174 2213 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:58:39.703450 kubelet[2213]: I1213 01:58:39.703198 2213 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:58:39.703977 kubelet[2213]: I1213 01:58:39.703949 2213 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 01:58:39.704165 kubelet[2213]: I1213 01:58:39.704138 2213 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:58:39.708184 kubelet[2213]: I1213 01:58:39.704678 2213 server.go:1256] "Started kubelet" Dec 13 01:58:39.708184 kubelet[2213]: I1213 01:58:39.705018 2213 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:58:39.709554 kubelet[2213]: I1213 01:58:39.709492 2213 server.go:461] "Adding debug handlers to kubelet server" Dec 13 01:58:39.716778 kernel: audit: type=1400 audit(1734055119.710:221): avc: denied { mac_admin } for pid=2213 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:58:39.716856 kernel: audit: type=1401 audit(1734055119.710:221): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 01:58:39.710000 audit[2213]: AVC avc: denied { mac_admin } for pid=2213 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:58:39.710000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 01:58:39.716951 kubelet[2213]: I1213 01:58:39.713533 2213 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:58:39.716951 kubelet[2213]: I1213 01:58:39.713805 2213 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:58:39.710000 audit[2213]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000ece180 a1=c000e9e438 a2=c000ece150 a3=25 items=0 ppid=1 pid=2213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:39.721784 kernel: audit: type=1300 audit(1734055119.710:221): arch=c000003e syscall=188 success=no exit=-22 a0=c000ece180 a1=c000e9e438 a2=c000ece150 a3=25 items=0 ppid=1 pid=2213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:39.710000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 01:58:39.726931 kernel: audit: type=1327 audit(1734055119.710:221): 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 01:58:39.727153 kubelet[2213]: I1213 01:58:39.727113 2213 kubelet.go:1417] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Dec 13 01:58:39.726000 audit[2213]: AVC avc: denied { mac_admin } for pid=2213 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:58:39.730600 kubelet[2213]: I1213 01:58:39.727209 2213 kubelet.go:1421] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Dec 13 01:58:39.730600 kubelet[2213]: I1213 01:58:39.727265 2213 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:58:39.730600 kubelet[2213]: E1213 01:58:39.729585 2213 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:58:39.732513 kernel: audit: type=1400 audit(1734055119.726:222): avc: denied { mac_admin } for pid=2213 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:58:39.732557 kernel: audit: type=1401 audit(1734055119.726:222): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 01:58:39.726000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 01:58:39.726000 audit[2213]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000d3eec0 a1=c000c6a7c8 a2=c000bfff50 a3=25 items=0 ppid=1 pid=2213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:39.734975 kubelet[2213]: I1213 01:58:39.733910 2213 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:58:39.735536 kubelet[2213]: I1213 01:58:39.735514 2213 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 01:58:39.737008 kubelet[2213]: I1213 01:58:39.735693 2213 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 01:58:39.737544 kernel: audit: type=1300 audit(1734055119.726:222): arch=c000003e syscall=188 success=no exit=-22 a0=c000d3eec0 a1=c000c6a7c8 a2=c000bfff50 a3=25 items=0 ppid=1 pid=2213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:39.726000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 01:58:39.742444 kubelet[2213]: I1213 01:58:39.739017 2213 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file 
or directory Dec 13 01:58:39.742444 kubelet[2213]: I1213 01:58:39.741799 2213 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:58:39.742444 kubelet[2213]: I1213 01:58:39.741811 2213 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:58:39.742806 kernel: audit: type=1327 audit(1734055119.726:222): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 01:58:39.762116 kubelet[2213]: I1213 01:58:39.762080 2213 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:58:39.764511 kubelet[2213]: I1213 01:58:39.764494 2213 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:58:39.764610 kubelet[2213]: I1213 01:58:39.764524 2213 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:58:39.764610 kubelet[2213]: I1213 01:58:39.764539 2213 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 01:58:39.764610 kubelet[2213]: E1213 01:58:39.764577 2213 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:58:39.789957 kubelet[2213]: I1213 01:58:39.789933 2213 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:58:39.790142 kubelet[2213]: I1213 01:58:39.790130 2213 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:58:39.790239 kubelet[2213]: I1213 01:58:39.790226 2213 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:58:39.790453 kubelet[2213]: I1213 01:58:39.790441 2213 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 01:58:39.790539 kubelet[2213]: I1213 01:58:39.790526 2213 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 01:58:39.790607 kubelet[2213]: I1213 01:58:39.790593 2213 policy_none.go:49] "None policy: Start" Dec 13 01:58:39.791106 kubelet[2213]: I1213 01:58:39.791095 2213 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:58:39.791203 kubelet[2213]: I1213 01:58:39.791192 2213 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:58:39.791448 kubelet[2213]: I1213 01:58:39.791436 2213 state_mem.go:75] "Updated machine memory state" Dec 13 01:58:39.792457 kubelet[2213]: I1213 01:58:39.792443 2213 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:58:39.791000 audit[2213]: AVC avc: denied { mac_admin } for pid=2213 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:58:39.791000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 01:58:39.791000 audit[2213]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00124be60 a1=c000e9ecf0 a2=c00124be30 a3=25 items=0 ppid=1 pid=2213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:39.791000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 01:58:39.792871 kubelet[2213]: I1213 01:58:39.792854 2213 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Dec 13 01:58:39.793106 kubelet[2213]: I1213 01:58:39.793094 2213 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:58:39.838896 kubelet[2213]: I1213 01:58:39.838873 2213 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:58:39.851228 kubelet[2213]: I1213 01:58:39.851200 2213 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Dec 13 01:58:39.851352 kubelet[2213]: I1213 01:58:39.851284 2213 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 01:58:39.864894 kubelet[2213]: I1213 01:58:39.864843 2213 topology_manager.go:215] "Topology Admit Handler" podUID="b14b28a13f5ab170b638535a330b0d68" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 01:58:39.865062 kubelet[2213]: I1213 01:58:39.864927 2213 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 01:58:39.865062 kubelet[2213]: I1213 01:58:39.864959 2213 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 01:58:39.872088 kubelet[2213]: E1213 01:58:39.872061 2213 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Dec 13 01:58:39.938452 kubelet[2213]: I1213 01:58:39.938329 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:58:39.938452 kubelet[2213]: I1213 01:58:39.938384 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:58:39.938642 kubelet[2213]: I1213 01:58:39.938468 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:58:39.938642 kubelet[2213]: I1213 01:58:39.938528 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " 
pod="kube-system/kube-controller-manager-localhost" Dec 13 01:58:39.938642 kubelet[2213]: I1213 01:58:39.938555 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b14b28a13f5ab170b638535a330b0d68-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b14b28a13f5ab170b638535a330b0d68\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:58:39.938642 kubelet[2213]: I1213 01:58:39.938591 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b14b28a13f5ab170b638535a330b0d68-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b14b28a13f5ab170b638535a330b0d68\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:58:39.938642 kubelet[2213]: I1213 01:58:39.938617 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b14b28a13f5ab170b638535a330b0d68-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b14b28a13f5ab170b638535a330b0d68\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:58:39.938827 kubelet[2213]: I1213 01:58:39.938644 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:58:39.938827 kubelet[2213]: I1213 01:58:39.938682 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Dec 13 01:58:40.172052 kubelet[2213]: E1213 01:58:40.172003 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:58:40.172360 kubelet[2213]: E1213 01:58:40.172336 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:58:40.172415 kubelet[2213]: E1213 01:58:40.172368 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:58:40.704433 kubelet[2213]: I1213 01:58:40.704353 2213 apiserver.go:52] "Watching apiserver" Dec 13 01:58:40.736616 kubelet[2213]: I1213 01:58:40.736572 2213 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 01:58:40.775559 kubelet[2213]: E1213 01:58:40.775515 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:58:40.830876 kubelet[2213]: E1213 01:58:40.830843 2213 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Dec 13 01:58:40.834789 kubelet[2213]: E1213 01:58:40.832087 2213 kubelet.go:1921] "Failed 
creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 13 01:58:40.835057 kubelet[2213]: E1213 01:58:40.835038 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:58:40.837817 kubelet[2213]: E1213 01:58:40.836725 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:58:40.842810 kubelet[2213]: I1213 01:58:40.842746 2213 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.8426430329999999 podStartE2EDuration="1.842643033s" podCreationTimestamp="2024-12-13 01:58:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:58:40.833242923 +0000 UTC m=+1.201591160" watchObservedRunningTime="2024-12-13 01:58:40.842643033 +0000 UTC m=+1.210991280" Dec 13 01:58:40.842980 kubelet[2213]: I1213 01:58:40.842950 2213 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.842906154 podStartE2EDuration="1.842906154s" podCreationTimestamp="2024-12-13 01:58:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:58:40.842821933 +0000 UTC m=+1.211170200" watchObservedRunningTime="2024-12-13 01:58:40.842906154 +0000 UTC m=+1.211254391" Dec 13 01:58:40.851205 kubelet[2213]: I1213 01:58:40.851154 2213 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.851115524 podStartE2EDuration="2.851115524s" podCreationTimestamp="2024-12-13 01:58:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:58:40.850714781 +0000 UTC m=+1.219063018" watchObservedRunningTime="2024-12-13 01:58:40.851115524 +0000 UTC m=+1.219463751" Dec 13 01:58:41.774932 kubelet[2213]: E1213 01:58:41.774906 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:58:41.776124 kubelet[2213]: E1213 01:58:41.775732 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:58:42.776172 kubelet[2213]: E1213 01:58:42.776137 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:58:42.776576 kubelet[2213]: E1213 01:58:42.776417 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:58:43.776822 kubelet[2213]: E1213 01:58:43.776792 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:58:44.609820 sudo[1472]: pam_unix(sudo:session): session closed for 
user root Dec 13 01:58:44.609000 audit[1472]: USER_END pid=1472 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 01:58:44.610952 kernel: kauditd_printk_skb: 4 callbacks suppressed Dec 13 01:58:44.611000 kernel: audit: type=1106 audit(1734055124.609:224): pid=1472 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 01:58:44.611230 sshd[1466]: pam_unix(sshd:session): session closed for user core Dec 13 01:58:44.613486 systemd[1]: sshd@6-10.0.0.48:22-10.0.0.1:46014.service: Deactivated successfully. Dec 13 01:58:44.609000 audit[1472]: CRED_DISP pid=1472 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 01:58:44.614427 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 01:58:44.614500 systemd-logind[1291]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:58:44.615487 systemd-logind[1291]: Removed session 7. Dec 13 01:58:44.617939 kernel: audit: type=1104 audit(1734055124.609:225): pid=1472 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 01:58:44.617989 kernel: audit: type=1106 audit(1734055124.611:226): pid=1466 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:58:44.611000 audit[1466]: USER_END pid=1466 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:58:44.611000 audit[1466]: CRED_DISP pid=1466 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:58:44.625590 kernel: audit: type=1104 audit(1734055124.611:227): pid=1466 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:58:44.625635 kernel: audit: type=1131 audit(1734055124.612:228): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.48:22-10.0.0.1:46014 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:44.612000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.48:22-10.0.0.1:46014 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:58:46.095983 update_engine[1295]: I1213 01:58:46.095935 1295 update_attempter.cc:509] Updating boot flags... 
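The NETFILTER_CFG/PROCTITLE audit records in the entries that follow log each iptables/ip6tables chain and rule that kube-proxy installs; the proctitle= field in those records is the audited process's command line, hex-encoded because the NUL bytes separating the arguments are not printable. A minimal Python sketch of how such a field can be decoded (decode_proctitle is an illustrative helper, not something taken from this log), applied to a proctitle value that appears verbatim in the records below:

    # The audit PROCTITLE value is the audited process's argv, hex-encoded
    # with NUL bytes between the individual arguments.
    def decode_proctitle(hex_value: str) -> str:
        """Decode an audit proctitle= field back into a readable command line."""
        raw = bytes.fromhex(hex_value)
        return " ".join(arg.decode("utf-8", errors="replace")
                        for arg in raw.split(b"\x00") if arg)

    # proctitle value copied verbatim from a netfilter audit record below
    sample = ("6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50"
              "524F58592D43414E415259002D74006D616E676C65")
    print(decode_proctitle(sample))
    # -> ip6tables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle

The decoded command matches the corresponding NETFILTER_CFG record (table=mangle, family=10, op=nft_register_chain): it is the ip6tables invocation creating the KUBE-PROXY-CANARY chain in the mangle table.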
Dec 13 01:58:49.232445 kubelet[2213]: E1213 01:58:49.232360 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:58:49.784417 kubelet[2213]: E1213 01:58:49.784382 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:58:50.870934 kubelet[2213]: E1213 01:58:50.870891 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:58:52.307502 kubelet[2213]: I1213 01:58:52.307467 2213 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 01:58:52.307913 env[1308]: time="2024-12-13T01:58:52.307845489Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 01:58:52.308096 kubelet[2213]: I1213 01:58:52.308009 2213 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 01:58:52.361396 kubelet[2213]: E1213 01:58:52.361354 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:58:52.789929 kubelet[2213]: E1213 01:58:52.789890 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:58:53.144140 kubelet[2213]: I1213 01:58:53.144098 2213 topology_manager.go:215] "Topology Admit Handler" podUID="eb07927d-5adc-479d-80ca-cb6087fe19d6" podNamespace="kube-system" podName="kube-proxy-p29rv" Dec 13 01:58:53.244336 kubelet[2213]: I1213 01:58:53.244292 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eb07927d-5adc-479d-80ca-cb6087fe19d6-xtables-lock\") pod \"kube-proxy-p29rv\" (UID: \"eb07927d-5adc-479d-80ca-cb6087fe19d6\") " pod="kube-system/kube-proxy-p29rv" Dec 13 01:58:53.244336 kubelet[2213]: I1213 01:58:53.244339 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/eb07927d-5adc-479d-80ca-cb6087fe19d6-kube-proxy\") pod \"kube-proxy-p29rv\" (UID: \"eb07927d-5adc-479d-80ca-cb6087fe19d6\") " pod="kube-system/kube-proxy-p29rv" Dec 13 01:58:53.244529 kubelet[2213]: I1213 01:58:53.244369 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eb07927d-5adc-479d-80ca-cb6087fe19d6-lib-modules\") pod \"kube-proxy-p29rv\" (UID: \"eb07927d-5adc-479d-80ca-cb6087fe19d6\") " pod="kube-system/kube-proxy-p29rv" Dec 13 01:58:53.244529 kubelet[2213]: I1213 01:58:53.244397 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47ck8\" (UniqueName: \"kubernetes.io/projected/eb07927d-5adc-479d-80ca-cb6087fe19d6-kube-api-access-47ck8\") pod \"kube-proxy-p29rv\" (UID: \"eb07927d-5adc-479d-80ca-cb6087fe19d6\") " pod="kube-system/kube-proxy-p29rv" Dec 13 01:58:53.299906 kubelet[2213]: I1213 01:58:53.299871 2213 topology_manager.go:215] "Topology Admit Handler" 
podUID="e21f8890-65dc-43d0-a009-62eb987d5477" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-sktj5" Dec 13 01:58:53.344966 kubelet[2213]: I1213 01:58:53.344929 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e21f8890-65dc-43d0-a009-62eb987d5477-var-lib-calico\") pod \"tigera-operator-c7ccbd65-sktj5\" (UID: \"e21f8890-65dc-43d0-a009-62eb987d5477\") " pod="tigera-operator/tigera-operator-c7ccbd65-sktj5" Dec 13 01:58:53.345372 kubelet[2213]: I1213 01:58:53.345003 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k89xh\" (UniqueName: \"kubernetes.io/projected/e21f8890-65dc-43d0-a009-62eb987d5477-kube-api-access-k89xh\") pod \"tigera-operator-c7ccbd65-sktj5\" (UID: \"e21f8890-65dc-43d0-a009-62eb987d5477\") " pod="tigera-operator/tigera-operator-c7ccbd65-sktj5" Dec 13 01:58:53.448841 kubelet[2213]: E1213 01:58:53.448694 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:58:53.449358 env[1308]: time="2024-12-13T01:58:53.449319484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p29rv,Uid:eb07927d-5adc-479d-80ca-cb6087fe19d6,Namespace:kube-system,Attempt:0,}" Dec 13 01:58:53.471457 env[1308]: time="2024-12-13T01:58:53.471395520Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:58:53.471457 env[1308]: time="2024-12-13T01:58:53.471439594Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:58:53.471646 env[1308]: time="2024-12-13T01:58:53.471456355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:58:53.471646 env[1308]: time="2024-12-13T01:58:53.471601469Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a725d186f02e533865775f40559966616c01cbeb4c4494f28fcf09869855b399 pid=2331 runtime=io.containerd.runc.v2 Dec 13 01:58:53.502839 env[1308]: time="2024-12-13T01:58:53.502326787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p29rv,Uid:eb07927d-5adc-479d-80ca-cb6087fe19d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"a725d186f02e533865775f40559966616c01cbeb4c4494f28fcf09869855b399\"" Dec 13 01:58:53.503365 kubelet[2213]: E1213 01:58:53.503338 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:58:53.506845 env[1308]: time="2024-12-13T01:58:53.506814632Z" level=info msg="CreateContainer within sandbox \"a725d186f02e533865775f40559966616c01cbeb4c4494f28fcf09869855b399\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 01:58:53.524518 env[1308]: time="2024-12-13T01:58:53.524460762Z" level=info msg="CreateContainer within sandbox \"a725d186f02e533865775f40559966616c01cbeb4c4494f28fcf09869855b399\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f0656071569cb0732e5c6fb32b9bb86ca64b42f8a47319f4e562bd8da75d5435\"" Dec 13 01:58:53.525227 env[1308]: time="2024-12-13T01:58:53.525184057Z" level=info msg="StartContainer for \"f0656071569cb0732e5c6fb32b9bb86ca64b42f8a47319f4e562bd8da75d5435\"" Dec 13 01:58:53.605167 env[1308]: time="2024-12-13T01:58:53.605113747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-sktj5,Uid:e21f8890-65dc-43d0-a009-62eb987d5477,Namespace:tigera-operator,Attempt:0,}" Dec 13 01:58:53.619000 audit[2425]: NETFILTER_CFG table=mangle:38 family=10 entries=1 op=nft_register_chain pid=2425 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 01:58:53.619000 audit[2425]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff3ebd4600 a2=0 a3=7fff3ebd45ec items=0 ppid=2382 pid=2425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:53.628965 kernel: audit: type=1325 audit(1734055133.619:229): table=mangle:38 family=10 entries=1 op=nft_register_chain pid=2425 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 01:58:53.629089 kernel: audit: type=1300 audit(1734055133.619:229): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff3ebd4600 a2=0 a3=7fff3ebd45ec items=0 ppid=2382 pid=2425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:53.629122 kernel: audit: type=1327 audit(1734055133.619:229): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 13 01:58:53.619000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 13 01:58:53.619000 audit[2424]: NETFILTER_CFG table=mangle:39 family=2 entries=1 op=nft_register_chain pid=2424 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:58:53.633950 kernel: 
audit: type=1325 audit(1734055133.619:230): table=mangle:39 family=2 entries=1 op=nft_register_chain pid=2424 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:58:53.619000 audit[2424]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffeb64f9c30 a2=0 a3=7ffeb64f9c1c items=0 ppid=2382 pid=2424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:53.639102 kernel: audit: type=1300 audit(1734055133.619:230): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffeb64f9c30 a2=0 a3=7ffeb64f9c1c items=0 ppid=2382 pid=2424 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:53.639160 kernel: audit: type=1327 audit(1734055133.619:230): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 13 01:58:53.619000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 13 01:58:53.641514 kernel: audit: type=1325 audit(1734055133.627:231): table=nat:40 family=10 entries=1 op=nft_register_chain pid=2427 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 01:58:53.627000 audit[2427]: NETFILTER_CFG table=nat:40 family=10 entries=1 op=nft_register_chain pid=2427 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 01:58:53.643978 kernel: audit: type=1300 audit(1734055133.627:231): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd397454d0 a2=0 a3=7ffd397454bc items=0 ppid=2382 pid=2427 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:53.627000 audit[2427]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd397454d0 a2=0 a3=7ffd397454bc items=0 ppid=2382 pid=2427 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:53.647505 env[1308]: time="2024-12-13T01:58:53.647463986Z" level=info msg="StartContainer for \"f0656071569cb0732e5c6fb32b9bb86ca64b42f8a47319f4e562bd8da75d5435\" returns successfully" Dec 13 01:58:53.627000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 13 01:58:53.684641 kernel: audit: type=1327 audit(1734055133.627:231): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 13 01:58:53.684729 kernel: audit: type=1325 audit(1734055133.630:232): table=nat:41 family=2 entries=1 op=nft_register_chain pid=2428 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:58:53.630000 audit[2428]: NETFILTER_CFG table=nat:41 family=2 entries=1 op=nft_register_chain pid=2428 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:58:53.630000 audit[2428]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd62018f30 a2=0 a3=7ffd62018f1c items=0 ppid=2382 pid=2428 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:53.630000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 13 01:58:53.630000 audit[2429]: NETFILTER_CFG table=filter:42 family=10 entries=1 op=nft_register_chain pid=2429 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 01:58:53.630000 audit[2429]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff97e2bf60 a2=0 a3=7fff97e2bf4c items=0 ppid=2382 pid=2429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:53.630000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 13 01:58:53.631000 audit[2430]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2430 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:58:53.631000 audit[2430]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe8e2b2f70 a2=0 a3=7ffe8e2b2f5c items=0 ppid=2382 pid=2430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:53.631000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 13 01:58:53.726000 audit[2431]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2431 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:58:53.726000 audit[2431]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd3cbaddf0 a2=0 a3=7ffd3cbadddc items=0 ppid=2382 pid=2431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:53.726000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 13 01:58:53.729000 audit[2433]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2433 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:58:53.729000 audit[2433]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffde8508cd0 a2=0 a3=7ffde8508cbc items=0 ppid=2382 pid=2433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:53.729000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Dec 13 01:58:53.732000 audit[2436]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2436 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:58:53.732000 audit[2436]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fffd67bfe90 a2=0 a3=7fffd67bfe7c items=0 ppid=2382 pid=2436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:53.732000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Dec 13 01:58:53.733000 audit[2437]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2437 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:58:53.733000 audit[2437]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc3dd59920 a2=0 a3=7ffc3dd5990c items=0 ppid=2382 pid=2437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:53.733000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 13 01:58:53.735000 audit[2439]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2439 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:58:53.735000 audit[2439]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe81a69250 a2=0 a3=7ffe81a6923c items=0 ppid=2382 pid=2439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:53.735000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 13 01:58:53.736000 audit[2440]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2440 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:58:53.736000 audit[2440]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe9323e700 a2=0 a3=7ffe9323e6ec items=0 ppid=2382 pid=2440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:53.736000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 13 01:58:53.738000 audit[2442]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2442 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:58:53.738000 audit[2442]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc0734f580 a2=0 a3=7ffc0734f56c items=0 ppid=2382 pid=2442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:53.738000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 13 01:58:53.741000 audit[2445]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2445 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:58:53.741000 audit[2445]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=744 a0=3 a1=7ffe9aefcac0 a2=0 a3=7ffe9aefcaac items=0 ppid=2382 pid=2445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:53.741000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Dec 13 01:58:53.742000 audit[2446]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2446 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:58:53.742000 audit[2446]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd09a31760 a2=0 a3=7ffd09a3174c items=0 ppid=2382 pid=2446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:53.742000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 13 01:58:53.745000 audit[2452]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2452 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:58:53.745000 audit[2452]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd4ebed6b0 a2=0 a3=7ffd4ebed69c items=0 ppid=2382 pid=2452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:53.745000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 13 01:58:53.746000 audit[2461]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2461 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:58:53.746000 audit[2461]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd68917030 a2=0 a3=7ffd6891701c items=0 ppid=2382 pid=2461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:53.746000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 13 01:58:53.748543 env[1308]: time="2024-12-13T01:58:53.748476436Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:58:53.748543 env[1308]: time="2024-12-13T01:58:53.748518936Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:58:53.748543 env[1308]: time="2024-12-13T01:58:53.748528484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:58:53.748743 env[1308]: time="2024-12-13T01:58:53.748713724Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a166cb86b0fc9ffe3f3c8a8bf7f6867ec7f3338c6f6e3510bb82e32d4a6ce549 pid=2457 runtime=io.containerd.runc.v2 Dec 13 01:58:53.748000 audit[2469]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2469 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:58:53.748000 audit[2469]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc3a0e71e0 a2=0 a3=7ffc3a0e71cc items=0 ppid=2382 pid=2469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:53.748000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 13 01:58:53.753000 audit[2478]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2478 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:58:53.753000 audit[2478]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffea7f96390 a2=0 a3=7ffea7f9637c items=0 ppid=2382 pid=2478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:53.753000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 13 01:58:53.757000 audit[2484]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2484 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:58:53.757000 audit[2484]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc6c614a40 a2=0 a3=7ffc6c614a2c items=0 ppid=2382 pid=2484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:53.757000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 13 01:58:53.758000 audit[2485]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2485 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:58:53.758000 audit[2485]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd76452360 a2=0 a3=7ffd7645234c items=0 ppid=2382 pid=2485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:53.758000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 13 01:58:53.762000 audit[2487]: NETFILTER_CFG table=nat:59 
family=2 entries=1 op=nft_register_rule pid=2487 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:58:53.762000 audit[2487]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7fff42ac8920 a2=0 a3=7fff42ac890c items=0 ppid=2382 pid=2487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:53.762000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 01:58:53.765000 audit[2492]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2492 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:58:53.765000 audit[2492]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd4e6615f0 a2=0 a3=7ffd4e6615dc items=0 ppid=2382 pid=2492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:53.765000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 01:58:53.766000 audit[2493]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2493 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:58:53.766000 audit[2493]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe782d9710 a2=0 a3=7ffe782d96fc items=0 ppid=2382 pid=2493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:53.766000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 13 01:58:53.768000 audit[2500]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2500 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 01:58:53.768000 audit[2500]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffed5020630 a2=0 a3=7ffed502061c items=0 ppid=2382 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:53.768000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 13 01:58:53.786000 audit[2506]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2506 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 01:58:53.786000 audit[2506]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7ffc21499d20 a2=0 a3=7ffc21499d0c items=0 ppid=2382 pid=2506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:53.786000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:58:53.794992 kubelet[2213]: E1213 01:58:53.793892 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:58:53.796000 audit[2506]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2506 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 01:58:53.796000 audit[2506]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffc21499d20 a2=0 a3=7ffc21499d0c items=0 ppid=2382 pid=2506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:53.796000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:58:53.798000 audit[2519]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2519 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 01:58:53.798000 audit[2519]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd17d68470 a2=0 a3=7ffd17d6845c items=0 ppid=2382 pid=2519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:53.798000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 13 01:58:53.801000 audit[2521]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2521 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 01:58:53.801000 audit[2521]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff3f737890 a2=0 a3=7fff3f73787c items=0 ppid=2382 pid=2521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:53.801000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Dec 13 01:58:53.806107 env[1308]: time="2024-12-13T01:58:53.806046444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-sktj5,Uid:e21f8890-65dc-43d0-a009-62eb987d5477,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a166cb86b0fc9ffe3f3c8a8bf7f6867ec7f3338c6f6e3510bb82e32d4a6ce549\"" Dec 13 01:58:53.805000 audit[2524]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2524 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 01:58:53.805000 audit[2524]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe9d5ea860 a2=0 a3=7ffe9d5ea84c items=0 ppid=2382 pid=2524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:53.805000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Dec 13 01:58:53.807000 audit[2525]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2525 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 01:58:53.807000 audit[2525]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffff23dbd70 a2=0 a3=7ffff23dbd5c items=0 ppid=2382 pid=2525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:53.807000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 13 01:58:53.810831 env[1308]: time="2024-12-13T01:58:53.809622396Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Dec 13 01:58:53.811000 audit[2527]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2527 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 01:58:53.811000 audit[2527]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd790467b0 a2=0 a3=7ffd7904679c items=0 ppid=2382 pid=2527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:53.811000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 13 01:58:53.812000 audit[2528]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2528 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 01:58:53.812000 audit[2528]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffefe3047b0 a2=0 a3=7ffefe30479c items=0 ppid=2382 pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:53.812000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 13 01:58:53.814000 audit[2530]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2530 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 01:58:53.814000 audit[2530]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffeacdf8fe0 a2=0 a3=7ffeacdf8fcc items=0 ppid=2382 pid=2530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:53.814000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Dec 13 01:58:53.817000 audit[2533]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2533 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Dec 13 01:58:53.817000 audit[2533]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffcf8d142c0 a2=0 a3=7ffcf8d142ac items=0 ppid=2382 pid=2533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:53.817000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 13 01:58:53.818000 audit[2534]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2534 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 01:58:53.818000 audit[2534]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffec15e6fe0 a2=0 a3=7ffec15e6fcc items=0 ppid=2382 pid=2534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:53.818000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 13 01:58:53.821000 audit[2536]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2536 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 01:58:53.821000 audit[2536]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc2f5e2290 a2=0 a3=7ffc2f5e227c items=0 ppid=2382 pid=2536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:53.821000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 13 01:58:53.822000 audit[2537]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2537 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 01:58:53.822000 audit[2537]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc0a2e5070 a2=0 a3=7ffc0a2e505c items=0 ppid=2382 pid=2537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:53.822000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 13 01:58:53.824000 audit[2539]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2539 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 01:58:53.824000 audit[2539]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff44711d80 a2=0 a3=7fff44711d6c items=0 ppid=2382 pid=2539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:53.824000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 13 01:58:53.827000 audit[2542]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2542 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 01:58:53.827000 audit[2542]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc77ffdac0 a2=0 a3=7ffc77ffdaac items=0 ppid=2382 pid=2542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:53.827000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 13 01:58:53.830000 audit[2545]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2545 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 01:58:53.830000 audit[2545]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd24440d00 a2=0 a3=7ffd24440cec items=0 ppid=2382 pid=2545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:53.830000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Dec 13 01:58:53.830000 audit[2546]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2546 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 01:58:53.830000 audit[2546]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff9ebf3d20 a2=0 a3=7fff9ebf3d0c items=0 ppid=2382 pid=2546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:53.830000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 13 01:58:53.832000 audit[2548]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2548 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 01:58:53.832000 audit[2548]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7fffb513e350 a2=0 a3=7fffb513e33c items=0 ppid=2382 pid=2548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:53.832000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 01:58:53.835000 audit[2551]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2551 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 
01:58:53.835000 audit[2551]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffea29e0d40 a2=0 a3=7ffea29e0d2c items=0 ppid=2382 pid=2551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:53.835000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 01:58:53.836000 audit[2552]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2552 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 01:58:53.836000 audit[2552]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffecf2fe700 a2=0 a3=7ffecf2fe6ec items=0 ppid=2382 pid=2552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:53.836000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 13 01:58:53.838000 audit[2554]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2554 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 01:58:53.838000 audit[2554]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffefb774ef0 a2=0 a3=7ffefb774edc items=0 ppid=2382 pid=2554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:53.838000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 13 01:58:53.839000 audit[2555]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2555 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 01:58:53.839000 audit[2555]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff0d22e480 a2=0 a3=7fff0d22e46c items=0 ppid=2382 pid=2555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:53.839000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 13 01:58:53.841000 audit[2557]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2557 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 01:58:53.841000 audit[2557]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffdc181ecc0 a2=0 a3=7ffdc181ecac items=0 ppid=2382 pid=2557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:53.841000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 01:58:53.843000 audit[2560]: NETFILTER_CFG table=filter:86 family=10 entries=1 
op=nft_register_rule pid=2560 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 01:58:53.843000 audit[2560]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd70196590 a2=0 a3=7ffd7019657c items=0 ppid=2382 pid=2560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:53.843000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 01:58:53.845000 audit[2562]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2562 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 13 01:58:53.845000 audit[2562]: SYSCALL arch=c000003e syscall=46 success=yes exit=2004 a0=3 a1=7fff5163fd30 a2=0 a3=7fff5163fd1c items=0 ppid=2382 pid=2562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:53.845000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:58:53.846000 audit[2562]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2562 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 13 01:58:53.846000 audit[2562]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7fff5163fd30 a2=0 a3=7fff5163fd1c items=0 ppid=2382 pid=2562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:58:53.846000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:58:55.284739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1354168863.mount: Deactivated successfully. 
Dec 13 01:58:57.211615 env[1308]: time="2024-12-13T01:58:57.211549008Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.36.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:58:57.213304 env[1308]: time="2024-12-13T01:58:57.213269221Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:58:57.214889 env[1308]: time="2024-12-13T01:58:57.214842858Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.36.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:58:57.216242 env[1308]: time="2024-12-13T01:58:57.216209534Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:58:57.216722 env[1308]: time="2024-12-13T01:58:57.216702734Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Dec 13 01:58:57.218096 env[1308]: time="2024-12-13T01:58:57.218063279Z" level=info msg="CreateContainer within sandbox \"a166cb86b0fc9ffe3f3c8a8bf7f6867ec7f3338c6f6e3510bb82e32d4a6ce549\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 13 01:58:57.228044 env[1308]: time="2024-12-13T01:58:57.228003570Z" level=info msg="CreateContainer within sandbox \"a166cb86b0fc9ffe3f3c8a8bf7f6867ec7f3338c6f6e3510bb82e32d4a6ce549\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c1a1bb37e457ccafdfe35c5f7eb4935887064c8c321189d183e4370a764e225a\"" Dec 13 01:58:57.228465 env[1308]: time="2024-12-13T01:58:57.228434904Z" level=info msg="StartContainer for \"c1a1bb37e457ccafdfe35c5f7eb4935887064c8c321189d183e4370a764e225a\"" Dec 13 01:58:57.267604 env[1308]: time="2024-12-13T01:58:57.267566620Z" level=info msg="StartContainer for \"c1a1bb37e457ccafdfe35c5f7eb4935887064c8c321189d183e4370a764e225a\" returns successfully" Dec 13 01:58:57.811502 kubelet[2213]: I1213 01:58:57.811439 2213 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-p29rv" podStartSLOduration=4.811381584 podStartE2EDuration="4.811381584s" podCreationTimestamp="2024-12-13 01:58:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:58:53.808680367 +0000 UTC m=+14.177028604" watchObservedRunningTime="2024-12-13 01:58:57.811381584 +0000 UTC m=+18.179729841" Dec 13 01:59:00.040000 audit[2603]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2603 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 01:59:00.050072 kernel: kauditd_printk_skb: 143 callbacks suppressed Dec 13 01:59:00.050224 kernel: audit: type=1325 audit(1734055140.040:280): table=filter:89 family=2 entries=15 op=nft_register_rule pid=2603 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 01:59:00.050249 kernel: audit: type=1300 audit(1734055140.040:280): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fff0e358bf0 a2=0 a3=7fff0e358bdc items=0 ppid=2382 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:00.040000 audit[2603]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fff0e358bf0 a2=0 a3=7fff0e358bdc items=0 ppid=2382 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:00.040000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:59:00.052787 kernel: audit: type=1327 audit(1734055140.040:280): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:59:00.054000 audit[2603]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2603 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 01:59:00.054000 audit[2603]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff0e358bf0 a2=0 a3=0 items=0 ppid=2382 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:00.062725 kernel: audit: type=1325 audit(1734055140.054:281): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2603 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 01:59:00.062787 kernel: audit: type=1300 audit(1734055140.054:281): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff0e358bf0 a2=0 a3=0 items=0 ppid=2382 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:00.065148 kernel: audit: type=1327 audit(1734055140.054:281): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:59:00.054000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:59:00.066000 audit[2605]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2605 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 01:59:00.066000 audit[2605]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fffafb6c7e0 a2=0 a3=7fffafb6c7cc items=0 ppid=2382 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:00.075267 kernel: audit: type=1325 audit(1734055140.066:282): table=filter:91 family=2 entries=16 op=nft_register_rule pid=2605 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 01:59:00.075349 kernel: audit: type=1300 audit(1734055140.066:282): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fffafb6c7e0 a2=0 a3=7fffafb6c7cc items=0 ppid=2382 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:00.075373 kernel: audit: type=1327 audit(1734055140.066:282): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:59:00.066000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:59:00.074000 audit[2605]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2605 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 01:59:00.074000 audit[2605]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffafb6c7e0 a2=0 a3=0 items=0 ppid=2382 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:00.074000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:59:00.080789 kernel: audit: type=1325 audit(1734055140.074:283): table=nat:92 family=2 entries=12 op=nft_register_rule pid=2605 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 01:59:00.171148 kubelet[2213]: I1213 01:59:00.171100 2213 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-sktj5" podStartSLOduration=3.7631439650000003 podStartE2EDuration="7.171050002s" podCreationTimestamp="2024-12-13 01:58:53 +0000 UTC" firstStartedPulling="2024-12-13 01:58:53.809061056 +0000 UTC m=+14.177409293" lastFinishedPulling="2024-12-13 01:58:57.216967083 +0000 UTC m=+17.585315330" observedRunningTime="2024-12-13 01:58:57.811795053 +0000 UTC m=+18.180143320" watchObservedRunningTime="2024-12-13 01:59:00.171050002 +0000 UTC m=+20.539398239" Dec 13 01:59:00.171601 kubelet[2213]: I1213 01:59:00.171239 2213 topology_manager.go:215] "Topology Admit Handler" podUID="1316e06d-08d8-49e4-8346-f8221583c6fc" podNamespace="calico-system" podName="calico-typha-8449dcb8f5-nmwjd" Dec 13 01:59:00.193913 kubelet[2213]: I1213 01:59:00.193858 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1316e06d-08d8-49e4-8346-f8221583c6fc-tigera-ca-bundle\") pod \"calico-typha-8449dcb8f5-nmwjd\" (UID: \"1316e06d-08d8-49e4-8346-f8221583c6fc\") " pod="calico-system/calico-typha-8449dcb8f5-nmwjd" Dec 13 01:59:00.194236 kubelet[2213]: I1213 01:59:00.194186 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgrkr\" (UniqueName: \"kubernetes.io/projected/1316e06d-08d8-49e4-8346-f8221583c6fc-kube-api-access-rgrkr\") pod \"calico-typha-8449dcb8f5-nmwjd\" (UID: \"1316e06d-08d8-49e4-8346-f8221583c6fc\") " pod="calico-system/calico-typha-8449dcb8f5-nmwjd" Dec 13 01:59:00.194456 kubelet[2213]: I1213 01:59:00.194257 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1316e06d-08d8-49e4-8346-f8221583c6fc-typha-certs\") pod \"calico-typha-8449dcb8f5-nmwjd\" (UID: \"1316e06d-08d8-49e4-8346-f8221583c6fc\") " pod="calico-system/calico-typha-8449dcb8f5-nmwjd" Dec 13 01:59:00.447451 kubelet[2213]: I1213 01:59:00.447374 2213 topology_manager.go:215] "Topology Admit Handler" podUID="d7094e51-f609-4410-acd5-98b3dbcfdb7f" podNamespace="calico-system" podName="calico-node-s8cbw" Dec 13 01:59:00.473828 kubelet[2213]: E1213 01:59:00.473792 2213 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:59:00.474262 env[1308]: time="2024-12-13T01:59:00.474212673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8449dcb8f5-nmwjd,Uid:1316e06d-08d8-49e4-8346-f8221583c6fc,Namespace:calico-system,Attempt:0,}" Dec 13 01:59:00.495753 kubelet[2213]: I1213 01:59:00.495719 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d7094e51-f609-4410-acd5-98b3dbcfdb7f-var-run-calico\") pod \"calico-node-s8cbw\" (UID: \"d7094e51-f609-4410-acd5-98b3dbcfdb7f\") " pod="calico-system/calico-node-s8cbw" Dec 13 01:59:00.495978 kubelet[2213]: I1213 01:59:00.495964 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d7094e51-f609-4410-acd5-98b3dbcfdb7f-cni-net-dir\") pod \"calico-node-s8cbw\" (UID: \"d7094e51-f609-4410-acd5-98b3dbcfdb7f\") " pod="calico-system/calico-node-s8cbw" Dec 13 01:59:00.496089 kubelet[2213]: I1213 01:59:00.496074 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d7094e51-f609-4410-acd5-98b3dbcfdb7f-flexvol-driver-host\") pod \"calico-node-s8cbw\" (UID: \"d7094e51-f609-4410-acd5-98b3dbcfdb7f\") " pod="calico-system/calico-node-s8cbw" Dec 13 01:59:00.496182 kubelet[2213]: I1213 01:59:00.496168 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9npf\" (UniqueName: \"kubernetes.io/projected/d7094e51-f609-4410-acd5-98b3dbcfdb7f-kube-api-access-m9npf\") pod \"calico-node-s8cbw\" (UID: \"d7094e51-f609-4410-acd5-98b3dbcfdb7f\") " pod="calico-system/calico-node-s8cbw" Dec 13 01:59:00.496285 kubelet[2213]: I1213 01:59:00.496270 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d7094e51-f609-4410-acd5-98b3dbcfdb7f-policysync\") pod \"calico-node-s8cbw\" (UID: \"d7094e51-f609-4410-acd5-98b3dbcfdb7f\") " pod="calico-system/calico-node-s8cbw" Dec 13 01:59:00.496386 kubelet[2213]: I1213 01:59:00.496372 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d7094e51-f609-4410-acd5-98b3dbcfdb7f-node-certs\") pod \"calico-node-s8cbw\" (UID: \"d7094e51-f609-4410-acd5-98b3dbcfdb7f\") " pod="calico-system/calico-node-s8cbw" Dec 13 01:59:00.496484 kubelet[2213]: I1213 01:59:00.496468 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d7094e51-f609-4410-acd5-98b3dbcfdb7f-lib-modules\") pod \"calico-node-s8cbw\" (UID: \"d7094e51-f609-4410-acd5-98b3dbcfdb7f\") " pod="calico-system/calico-node-s8cbw" Dec 13 01:59:00.496583 kubelet[2213]: I1213 01:59:00.496568 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d7094e51-f609-4410-acd5-98b3dbcfdb7f-cni-log-dir\") pod \"calico-node-s8cbw\" (UID: \"d7094e51-f609-4410-acd5-98b3dbcfdb7f\") " pod="calico-system/calico-node-s8cbw" Dec 13 01:59:00.496785 kubelet[2213]: I1213 01:59:00.496736 2213 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d7094e51-f609-4410-acd5-98b3dbcfdb7f-var-lib-calico\") pod \"calico-node-s8cbw\" (UID: \"d7094e51-f609-4410-acd5-98b3dbcfdb7f\") " pod="calico-system/calico-node-s8cbw" Dec 13 01:59:00.496949 kubelet[2213]: I1213 01:59:00.496805 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7094e51-f609-4410-acd5-98b3dbcfdb7f-tigera-ca-bundle\") pod \"calico-node-s8cbw\" (UID: \"d7094e51-f609-4410-acd5-98b3dbcfdb7f\") " pod="calico-system/calico-node-s8cbw" Dec 13 01:59:00.496949 kubelet[2213]: I1213 01:59:00.496824 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d7094e51-f609-4410-acd5-98b3dbcfdb7f-cni-bin-dir\") pod \"calico-node-s8cbw\" (UID: \"d7094e51-f609-4410-acd5-98b3dbcfdb7f\") " pod="calico-system/calico-node-s8cbw" Dec 13 01:59:00.496949 kubelet[2213]: I1213 01:59:00.496849 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d7094e51-f609-4410-acd5-98b3dbcfdb7f-xtables-lock\") pod \"calico-node-s8cbw\" (UID: \"d7094e51-f609-4410-acd5-98b3dbcfdb7f\") " pod="calico-system/calico-node-s8cbw" Dec 13 01:59:00.559638 env[1308]: time="2024-12-13T01:59:00.557925593Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:59:00.559638 env[1308]: time="2024-12-13T01:59:00.557959206Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:59:00.559638 env[1308]: time="2024-12-13T01:59:00.557971379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:59:00.559638 env[1308]: time="2024-12-13T01:59:00.558078972Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/33797ec2724728fc872391aafa0a6500f9b925448c0f3f468028778fb31d1d2b pid=2613 runtime=io.containerd.runc.v2 Dec 13 01:59:00.577005 kubelet[2213]: I1213 01:59:00.576492 2213 topology_manager.go:215] "Topology Admit Handler" podUID="7937f569-a24d-4eec-b55c-c7674aa42251" podNamespace="calico-system" podName="csi-node-driver-z69kl" Dec 13 01:59:00.577005 kubelet[2213]: E1213 01:59:00.576749 2213 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z69kl" podUID="7937f569-a24d-4eec-b55c-c7674aa42251" Dec 13 01:59:00.599104 kubelet[2213]: I1213 01:59:00.599067 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7937f569-a24d-4eec-b55c-c7674aa42251-varrun\") pod \"csi-node-driver-z69kl\" (UID: \"7937f569-a24d-4eec-b55c-c7674aa42251\") " pod="calico-system/csi-node-driver-z69kl" Dec 13 01:59:00.599315 kubelet[2213]: I1213 01:59:00.599115 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7937f569-a24d-4eec-b55c-c7674aa42251-kubelet-dir\") pod \"csi-node-driver-z69kl\" (UID: \"7937f569-a24d-4eec-b55c-c7674aa42251\") " pod="calico-system/csi-node-driver-z69kl" Dec 13 01:59:00.599315 kubelet[2213]: I1213 01:59:00.599163 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7937f569-a24d-4eec-b55c-c7674aa42251-socket-dir\") pod \"csi-node-driver-z69kl\" (UID: \"7937f569-a24d-4eec-b55c-c7674aa42251\") " pod="calico-system/csi-node-driver-z69kl" Dec 13 01:59:00.599315 kubelet[2213]: I1213 01:59:00.599252 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2zfd\" (UniqueName: \"kubernetes.io/projected/7937f569-a24d-4eec-b55c-c7674aa42251-kube-api-access-m2zfd\") pod \"csi-node-driver-z69kl\" (UID: \"7937f569-a24d-4eec-b55c-c7674aa42251\") " pod="calico-system/csi-node-driver-z69kl" Dec 13 01:59:00.599407 kubelet[2213]: I1213 01:59:00.599331 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7937f569-a24d-4eec-b55c-c7674aa42251-registration-dir\") pod \"csi-node-driver-z69kl\" (UID: \"7937f569-a24d-4eec-b55c-c7674aa42251\") " pod="calico-system/csi-node-driver-z69kl" Dec 13 01:59:00.601856 kubelet[2213]: E1213 01:59:00.601322 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:00.601856 kubelet[2213]: W1213 01:59:00.601348 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:00.601856 kubelet[2213]: E1213 01:59:00.601373 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, 
skipping. Error: unexpected end of JSON input" Dec 13 01:59:00.605143 kubelet[2213]: E1213 01:59:00.604917 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:00.605143 kubelet[2213]: W1213 01:59:00.604950 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:00.605143 kubelet[2213]: E1213 01:59:00.604992 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:59:00.609840 kubelet[2213]: E1213 01:59:00.609613 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:00.609840 kubelet[2213]: W1213 01:59:00.609628 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:00.609840 kubelet[2213]: E1213 01:59:00.609645 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:59:00.615300 kubelet[2213]: E1213 01:59:00.615284 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:00.615406 kubelet[2213]: W1213 01:59:00.615390 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:00.615619 kubelet[2213]: E1213 01:59:00.615583 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:59:00.615737 kubelet[2213]: E1213 01:59:00.615724 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:00.615855 kubelet[2213]: W1213 01:59:00.615839 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:00.615967 kubelet[2213]: E1213 01:59:00.615940 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:59:00.616267 kubelet[2213]: E1213 01:59:00.616253 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:00.616356 kubelet[2213]: W1213 01:59:00.616340 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:00.616481 kubelet[2213]: E1213 01:59:00.616460 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:59:00.618901 kubelet[2213]: E1213 01:59:00.618887 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:00.618999 kubelet[2213]: W1213 01:59:00.618983 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:00.619178 kubelet[2213]: E1213 01:59:00.619166 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:59:00.619320 kubelet[2213]: E1213 01:59:00.619294 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:00.619320 kubelet[2213]: W1213 01:59:00.619313 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:00.619425 kubelet[2213]: E1213 01:59:00.619331 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:59:00.619734 kubelet[2213]: E1213 01:59:00.619713 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:00.619734 kubelet[2213]: W1213 01:59:00.619725 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:00.619734 kubelet[2213]: E1213 01:59:00.619736 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:59:00.619961 kubelet[2213]: E1213 01:59:00.619940 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:00.619961 kubelet[2213]: W1213 01:59:00.619952 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:00.619961 kubelet[2213]: E1213 01:59:00.619961 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:59:00.637873 env[1308]: time="2024-12-13T01:59:00.637748276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8449dcb8f5-nmwjd,Uid:1316e06d-08d8-49e4-8346-f8221583c6fc,Namespace:calico-system,Attempt:0,} returns sandbox id \"33797ec2724728fc872391aafa0a6500f9b925448c0f3f468028778fb31d1d2b\"" Dec 13 01:59:00.638678 kubelet[2213]: E1213 01:59:00.638651 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:59:00.640220 env[1308]: time="2024-12-13T01:59:00.640188753Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Dec 13 01:59:00.700792 kubelet[2213]: E1213 01:59:00.700681 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:00.700792 kubelet[2213]: W1213 01:59:00.700709 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:00.700792 kubelet[2213]: E1213 01:59:00.700728 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:59:00.701001 kubelet[2213]: E1213 01:59:00.700926 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:00.701001 kubelet[2213]: W1213 01:59:00.700934 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:00.701001 kubelet[2213]: E1213 01:59:00.700945 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:59:00.701359 kubelet[2213]: E1213 01:59:00.701296 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:00.701359 kubelet[2213]: W1213 01:59:00.701310 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:00.701359 kubelet[2213]: E1213 01:59:00.701337 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:59:00.701735 kubelet[2213]: E1213 01:59:00.701718 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:00.701735 kubelet[2213]: W1213 01:59:00.701730 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:00.701735 kubelet[2213]: E1213 01:59:00.701757 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:59:00.702225 kubelet[2213]: E1213 01:59:00.702171 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:00.702225 kubelet[2213]: W1213 01:59:00.702184 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:00.702225 kubelet[2213]: E1213 01:59:00.702213 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:59:00.702449 kubelet[2213]: E1213 01:59:00.702436 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:00.702449 kubelet[2213]: W1213 01:59:00.702447 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:00.702520 kubelet[2213]: E1213 01:59:00.702460 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:59:00.702646 kubelet[2213]: E1213 01:59:00.702618 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:00.702646 kubelet[2213]: W1213 01:59:00.702631 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:00.702646 kubelet[2213]: E1213 01:59:00.702645 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:59:00.702835 kubelet[2213]: E1213 01:59:00.702824 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:00.702835 kubelet[2213]: W1213 01:59:00.702833 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:00.702916 kubelet[2213]: E1213 01:59:00.702865 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:59:00.703041 kubelet[2213]: E1213 01:59:00.703011 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:00.703041 kubelet[2213]: W1213 01:59:00.703034 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:00.703147 kubelet[2213]: E1213 01:59:00.703114 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:59:00.703233 kubelet[2213]: E1213 01:59:00.703221 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:00.703233 kubelet[2213]: W1213 01:59:00.703231 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:00.703318 kubelet[2213]: E1213 01:59:00.703265 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:59:00.703399 kubelet[2213]: E1213 01:59:00.703385 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:00.703399 kubelet[2213]: W1213 01:59:00.703395 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:00.703483 kubelet[2213]: E1213 01:59:00.703432 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:59:00.703600 kubelet[2213]: E1213 01:59:00.703586 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:00.703600 kubelet[2213]: W1213 01:59:00.703597 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:00.703678 kubelet[2213]: E1213 01:59:00.703629 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:59:00.703840 kubelet[2213]: E1213 01:59:00.703825 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:00.703840 kubelet[2213]: W1213 01:59:00.703835 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:00.704003 kubelet[2213]: E1213 01:59:00.703852 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:59:00.704169 kubelet[2213]: E1213 01:59:00.704153 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:00.704169 kubelet[2213]: W1213 01:59:00.704169 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:00.704251 kubelet[2213]: E1213 01:59:00.704189 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:59:00.704511 kubelet[2213]: E1213 01:59:00.704361 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:00.704511 kubelet[2213]: W1213 01:59:00.704373 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:00.704511 kubelet[2213]: E1213 01:59:00.704451 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:59:00.704615 kubelet[2213]: E1213 01:59:00.704521 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:00.704615 kubelet[2213]: W1213 01:59:00.704528 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:00.704615 kubelet[2213]: E1213 01:59:00.704607 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:59:00.705000 kubelet[2213]: E1213 01:59:00.704693 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:00.705000 kubelet[2213]: W1213 01:59:00.704704 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:00.705000 kubelet[2213]: E1213 01:59:00.704799 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:59:00.705000 kubelet[2213]: E1213 01:59:00.704900 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:00.705000 kubelet[2213]: W1213 01:59:00.704910 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:00.705156 kubelet[2213]: E1213 01:59:00.705019 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:59:00.705156 kubelet[2213]: E1213 01:59:00.705108 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:00.705156 kubelet[2213]: W1213 01:59:00.705116 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:00.705156 kubelet[2213]: E1213 01:59:00.705131 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:59:00.705323 kubelet[2213]: E1213 01:59:00.705306 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:00.705323 kubelet[2213]: W1213 01:59:00.705319 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:00.705423 kubelet[2213]: E1213 01:59:00.705334 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:59:00.706232 kubelet[2213]: E1213 01:59:00.705520 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:00.706232 kubelet[2213]: W1213 01:59:00.705534 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:00.706232 kubelet[2213]: E1213 01:59:00.705581 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:59:00.706232 kubelet[2213]: E1213 01:59:00.705759 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:00.706232 kubelet[2213]: W1213 01:59:00.705777 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:00.706232 kubelet[2213]: E1213 01:59:00.705794 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:59:00.706232 kubelet[2213]: E1213 01:59:00.705979 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:00.706232 kubelet[2213]: W1213 01:59:00.705989 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:00.706232 kubelet[2213]: E1213 01:59:00.706006 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:59:00.706232 kubelet[2213]: E1213 01:59:00.706195 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:00.706500 kubelet[2213]: W1213 01:59:00.706204 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:00.706500 kubelet[2213]: E1213 01:59:00.706216 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:59:00.711434 kubelet[2213]: E1213 01:59:00.711409 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:00.711434 kubelet[2213]: W1213 01:59:00.711429 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:00.711535 kubelet[2213]: E1213 01:59:00.711453 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:59:00.750838 kubelet[2213]: E1213 01:59:00.750806 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:59:00.751405 env[1308]: time="2024-12-13T01:59:00.751364561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-s8cbw,Uid:d7094e51-f609-4410-acd5-98b3dbcfdb7f,Namespace:calico-system,Attempt:0,}" Dec 13 01:59:00.803247 kubelet[2213]: E1213 01:59:00.803215 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:00.803247 kubelet[2213]: W1213 01:59:00.803230 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:00.803247 kubelet[2213]: E1213 01:59:00.803246 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:59:00.871323 kubelet[2213]: E1213 01:59:00.871265 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:00.871323 kubelet[2213]: W1213 01:59:00.871293 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:00.871323 kubelet[2213]: E1213 01:59:00.871317 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:59:00.893290 env[1308]: time="2024-12-13T01:59:00.893209096Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:59:00.893290 env[1308]: time="2024-12-13T01:59:00.893256746Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:59:00.893290 env[1308]: time="2024-12-13T01:59:00.893269581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:59:00.893555 env[1308]: time="2024-12-13T01:59:00.893446533Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f199fda15efdc091c42141a32ef101f9b5c9f41240cc4a2986e9249b5391bfca pid=2693 runtime=io.containerd.runc.v2 Dec 13 01:59:00.933079 env[1308]: time="2024-12-13T01:59:00.933010746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-s8cbw,Uid:d7094e51-f609-4410-acd5-98b3dbcfdb7f,Namespace:calico-system,Attempt:0,} returns sandbox id \"f199fda15efdc091c42141a32ef101f9b5c9f41240cc4a2986e9249b5391bfca\"" Dec 13 01:59:00.934064 kubelet[2213]: E1213 01:59:00.933642 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:59:01.083000 audit[2729]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=2729 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 01:59:01.083000 audit[2729]: SYSCALL arch=c000003e syscall=46 success=yes exit=6652 a0=3 a1=7ffc1f8bf7e0 a2=0 a3=7ffc1f8bf7cc items=0 ppid=2382 pid=2729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:01.083000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:59:01.089000 audit[2729]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2729 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 01:59:01.089000 audit[2729]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc1f8bf7e0 a2=0 a3=0 items=0 ppid=2382 pid=2729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:01.089000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:59:01.765105 kubelet[2213]: E1213 01:59:01.765068 2213 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z69kl" podUID="7937f569-a24d-4eec-b55c-c7674aa42251" Dec 13 01:59:02.184028 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount852917550.mount: Deactivated successfully. 
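The repeated driver-call.go:262 / driver-call.go:149 / plugins.go:730 messages above all describe one condition: kubelet is probing the FlexVolume plugin directory nodeagent~uds, the expected driver executable /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds does not exist yet, so the call produces no output and unmarshalling that empty output as JSON fails. The driver is normally installed by Calico's flexvol-driver container (the ghcr.io/flatcar/calico/pod2daemon-flexvol image pulled further down), which is consistent with these messages no longer appearing after that container starts. For reference, the FlexVolume contract only requires the driver to answer the init call with a JSON status object on stdout; the following is a minimal, hypothetical stub in Go illustrating that contract, not Calico's actual driver:

    // flexvol_stub.go - hypothetical FlexVolume driver stub.
    // It answers the kubelet "init" probe with the JSON status object the
    // FlexVolume protocol expects on stdout; other subcommands are declined.
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    type driverStatus struct {
        Status       string          `json:"status"`
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        if len(os.Args) > 1 && os.Args[1] == "init" {
            out, _ := json.Marshal(driverStatus{
                Status:       "Success",
                Capabilities: map[string]bool{"attach": false},
            })
            fmt.Println(string(out))
            return
        }
        // Subcommands other than init are not implemented in this sketch.
        out, _ := json.Marshal(driverStatus{Status: "Not supported"})
        fmt.Println(string(out))
        os.Exit(1)
    }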
Dec 13 01:59:02.884462 env[1308]: time="2024-12-13T01:59:02.884413335Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:59:02.886225 env[1308]: time="2024-12-13T01:59:02.886191192Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:59:02.887681 env[1308]: time="2024-12-13T01:59:02.887652934Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:59:02.889075 env[1308]: time="2024-12-13T01:59:02.889047279Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:59:02.889478 env[1308]: time="2024-12-13T01:59:02.889457872Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Dec 13 01:59:02.890194 env[1308]: time="2024-12-13T01:59:02.890065045Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Dec 13 01:59:02.900169 env[1308]: time="2024-12-13T01:59:02.900121307Z" level=info msg="CreateContainer within sandbox \"33797ec2724728fc872391aafa0a6500f9b925448c0f3f468028778fb31d1d2b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 13 01:59:02.916252 env[1308]: time="2024-12-13T01:59:02.916201539Z" level=info msg="CreateContainer within sandbox \"33797ec2724728fc872391aafa0a6500f9b925448c0f3f468028778fb31d1d2b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"73652334a3b183ba40794fd9a7b55729d6cb4c7dcd9bf79c76e84123446df0b6\"" Dec 13 01:59:02.916757 env[1308]: time="2024-12-13T01:59:02.916727799Z" level=info msg="StartContainer for \"73652334a3b183ba40794fd9a7b55729d6cb4c7dcd9bf79c76e84123446df0b6\"" Dec 13 01:59:02.968939 env[1308]: time="2024-12-13T01:59:02.968887084Z" level=info msg="StartContainer for \"73652334a3b183ba40794fd9a7b55729d6cb4c7dcd9bf79c76e84123446df0b6\" returns successfully" Dec 13 01:59:03.765125 kubelet[2213]: E1213 01:59:03.765068 2213 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z69kl" podUID="7937f569-a24d-4eec-b55c-c7674aa42251" Dec 13 01:59:03.813652 kubelet[2213]: E1213 01:59:03.813618 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:59:03.823445 kubelet[2213]: I1213 01:59:03.823226 2213 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-8449dcb8f5-nmwjd" podStartSLOduration=1.573043778 podStartE2EDuration="3.823188687s" podCreationTimestamp="2024-12-13 01:59:00 +0000 UTC" firstStartedPulling="2024-12-13 01:59:00.639634849 +0000 UTC m=+21.007983086" lastFinishedPulling="2024-12-13 01:59:02.889779728 +0000 UTC m=+23.258127995" observedRunningTime="2024-12-13 
01:59:03.822715987 +0000 UTC m=+24.191064234" watchObservedRunningTime="2024-12-13 01:59:03.823188687 +0000 UTC m=+24.191536924" Dec 13 01:59:03.904274 kubelet[2213]: E1213 01:59:03.904249 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:03.904274 kubelet[2213]: W1213 01:59:03.904268 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:03.904426 kubelet[2213]: E1213 01:59:03.904286 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:59:03.904452 kubelet[2213]: E1213 01:59:03.904438 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:03.904452 kubelet[2213]: W1213 01:59:03.904443 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:03.904452 kubelet[2213]: E1213 01:59:03.904452 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:59:03.904617 kubelet[2213]: E1213 01:59:03.904600 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:03.904617 kubelet[2213]: W1213 01:59:03.904610 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:03.904669 kubelet[2213]: E1213 01:59:03.904649 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:59:03.904833 kubelet[2213]: E1213 01:59:03.904823 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:03.904833 kubelet[2213]: W1213 01:59:03.904830 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:03.904905 kubelet[2213]: E1213 01:59:03.904840 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:59:03.905012 kubelet[2213]: E1213 01:59:03.905001 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:03.905012 kubelet[2213]: W1213 01:59:03.905009 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:03.905062 kubelet[2213]: E1213 01:59:03.905017 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:59:03.905151 kubelet[2213]: E1213 01:59:03.905139 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:03.905151 kubelet[2213]: W1213 01:59:03.905146 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:03.905232 kubelet[2213]: E1213 01:59:03.905155 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:59:03.905303 kubelet[2213]: E1213 01:59:03.905294 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:03.905303 kubelet[2213]: W1213 01:59:03.905301 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:03.905354 kubelet[2213]: E1213 01:59:03.905309 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:59:03.905452 kubelet[2213]: E1213 01:59:03.905444 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:03.905452 kubelet[2213]: W1213 01:59:03.905451 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:03.905496 kubelet[2213]: E1213 01:59:03.905459 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:59:03.905599 kubelet[2213]: E1213 01:59:03.905587 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:03.905599 kubelet[2213]: W1213 01:59:03.905594 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:03.905599 kubelet[2213]: E1213 01:59:03.905602 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:59:03.905730 kubelet[2213]: E1213 01:59:03.905719 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:03.905730 kubelet[2213]: W1213 01:59:03.905726 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:03.905813 kubelet[2213]: E1213 01:59:03.905734 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:59:03.905892 kubelet[2213]: E1213 01:59:03.905883 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:03.905892 kubelet[2213]: W1213 01:59:03.905890 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:03.905941 kubelet[2213]: E1213 01:59:03.905898 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:59:03.906038 kubelet[2213]: E1213 01:59:03.906030 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:03.906063 kubelet[2213]: W1213 01:59:03.906038 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:03.906063 kubelet[2213]: E1213 01:59:03.906045 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:59:03.906171 kubelet[2213]: E1213 01:59:03.906164 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:03.906171 kubelet[2213]: W1213 01:59:03.906171 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:03.906220 kubelet[2213]: E1213 01:59:03.906179 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:59:03.906302 kubelet[2213]: E1213 01:59:03.906295 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:03.906326 kubelet[2213]: W1213 01:59:03.906302 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:03.906326 kubelet[2213]: E1213 01:59:03.906311 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:59:03.906432 kubelet[2213]: E1213 01:59:03.906425 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:03.906432 kubelet[2213]: W1213 01:59:03.906432 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:03.906481 kubelet[2213]: E1213 01:59:03.906439 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:59:03.929826 kubelet[2213]: E1213 01:59:03.929802 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:03.929826 kubelet[2213]: W1213 01:59:03.929819 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:03.929902 kubelet[2213]: E1213 01:59:03.929840 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:59:03.930057 kubelet[2213]: E1213 01:59:03.930040 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:03.930057 kubelet[2213]: W1213 01:59:03.930052 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:03.930129 kubelet[2213]: E1213 01:59:03.930067 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:59:03.930229 kubelet[2213]: E1213 01:59:03.930215 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:03.930229 kubelet[2213]: W1213 01:59:03.930224 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:03.930300 kubelet[2213]: E1213 01:59:03.930235 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:59:03.930378 kubelet[2213]: E1213 01:59:03.930366 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:03.930378 kubelet[2213]: W1213 01:59:03.930374 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:03.930428 kubelet[2213]: E1213 01:59:03.930388 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:59:03.930580 kubelet[2213]: E1213 01:59:03.930567 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:03.930580 kubelet[2213]: W1213 01:59:03.930580 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:03.930653 kubelet[2213]: E1213 01:59:03.930597 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:59:03.930759 kubelet[2213]: E1213 01:59:03.930744 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:03.930759 kubelet[2213]: W1213 01:59:03.930756 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:03.930829 kubelet[2213]: E1213 01:59:03.930779 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:59:03.930936 kubelet[2213]: E1213 01:59:03.930921 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:03.930936 kubelet[2213]: W1213 01:59:03.930931 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:03.931018 kubelet[2213]: E1213 01:59:03.930946 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:59:03.931125 kubelet[2213]: E1213 01:59:03.931114 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:03.931179 kubelet[2213]: W1213 01:59:03.931124 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:03.931179 kubelet[2213]: E1213 01:59:03.931140 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:59:03.931284 kubelet[2213]: E1213 01:59:03.931276 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:03.931284 kubelet[2213]: W1213 01:59:03.931283 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:03.931328 kubelet[2213]: E1213 01:59:03.931296 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:59:03.931432 kubelet[2213]: E1213 01:59:03.931425 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:03.931460 kubelet[2213]: W1213 01:59:03.931431 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:03.931460 kubelet[2213]: E1213 01:59:03.931444 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:59:03.931573 kubelet[2213]: E1213 01:59:03.931561 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:03.931599 kubelet[2213]: W1213 01:59:03.931580 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:03.931599 kubelet[2213]: E1213 01:59:03.931594 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:59:03.931755 kubelet[2213]: E1213 01:59:03.931742 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:03.931755 kubelet[2213]: W1213 01:59:03.931750 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:03.931755 kubelet[2213]: E1213 01:59:03.931772 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:59:03.931927 kubelet[2213]: E1213 01:59:03.931911 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:03.931927 kubelet[2213]: W1213 01:59:03.931922 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:03.932055 kubelet[2213]: E1213 01:59:03.931937 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:59:03.932081 kubelet[2213]: E1213 01:59:03.932067 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:03.932081 kubelet[2213]: W1213 01:59:03.932073 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:03.932130 kubelet[2213]: E1213 01:59:03.932084 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:59:03.932259 kubelet[2213]: E1213 01:59:03.932239 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:03.932259 kubelet[2213]: W1213 01:59:03.932248 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:03.932259 kubelet[2213]: E1213 01:59:03.932259 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:59:03.932471 kubelet[2213]: E1213 01:59:03.932457 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:03.932471 kubelet[2213]: W1213 01:59:03.932468 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:03.932545 kubelet[2213]: E1213 01:59:03.932481 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:59:03.932693 kubelet[2213]: E1213 01:59:03.932679 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:03.932693 kubelet[2213]: W1213 01:59:03.932688 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:03.932779 kubelet[2213]: E1213 01:59:03.932700 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:59:03.932858 kubelet[2213]: E1213 01:59:03.932845 2213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:59:03.932858 kubelet[2213]: W1213 01:59:03.932855 2213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:59:03.932910 kubelet[2213]: E1213 01:59:03.932866 2213 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:59:03.995000 audit[2809]: NETFILTER_CFG table=filter:95 family=2 entries=17 op=nft_register_rule pid=2809 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 01:59:03.995000 audit[2809]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffcf452a340 a2=0 a3=7ffcf452a32c items=0 ppid=2382 pid=2809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:03.995000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:59:04.004000 audit[2809]: NETFILTER_CFG table=nat:96 family=2 entries=19 op=nft_register_chain pid=2809 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 01:59:04.004000 audit[2809]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7ffcf452a340 a2=0 a3=7ffcf452a32c items=0 ppid=2382 pid=2809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:04.004000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:59:04.747226 env[1308]: time="2024-12-13T01:59:04.747176288Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:59:04.749173 env[1308]: time="2024-12-13T01:59:04.749139371Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:59:04.750740 env[1308]: time="2024-12-13T01:59:04.750717761Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:59:04.752397 env[1308]: time="2024-12-13T01:59:04.752355103Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:59:04.752791 env[1308]: time="2024-12-13T01:59:04.752757349Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Dec 13 01:59:04.754589 env[1308]: time="2024-12-13T01:59:04.754555383Z" level=info msg="CreateContainer within sandbox \"f199fda15efdc091c42141a32ef101f9b5c9f41240cc4a2986e9249b5391bfca\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 01:59:04.766173 env[1308]: time="2024-12-13T01:59:04.766142337Z" level=info msg="CreateContainer within sandbox \"f199fda15efdc091c42141a32ef101f9b5c9f41240cc4a2986e9249b5391bfca\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5a6c3b119442b5355a47ef89b9f94bb6f46b4ccfa4910959c8e378c83833de42\"" Dec 13 01:59:04.766735 env[1308]: time="2024-12-13T01:59:04.766705586Z" level=info msg="StartContainer for \"5a6c3b119442b5355a47ef89b9f94bb6f46b4ccfa4910959c8e378c83833de42\"" Dec 13 
01:59:04.818908 kubelet[2213]: E1213 01:59:04.818161 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:59:04.828926 env[1308]: time="2024-12-13T01:59:04.828852489Z" level=info msg="StartContainer for \"5a6c3b119442b5355a47ef89b9f94bb6f46b4ccfa4910959c8e378c83833de42\" returns successfully" Dec 13 01:59:04.895834 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a6c3b119442b5355a47ef89b9f94bb6f46b4ccfa4910959c8e378c83833de42-rootfs.mount: Deactivated successfully. Dec 13 01:59:05.209708 env[1308]: time="2024-12-13T01:59:05.209641450Z" level=info msg="shim disconnected" id=5a6c3b119442b5355a47ef89b9f94bb6f46b4ccfa4910959c8e378c83833de42 Dec 13 01:59:05.209708 env[1308]: time="2024-12-13T01:59:05.209696092Z" level=warning msg="cleaning up after shim disconnected" id=5a6c3b119442b5355a47ef89b9f94bb6f46b4ccfa4910959c8e378c83833de42 namespace=k8s.io Dec 13 01:59:05.209708 env[1308]: time="2024-12-13T01:59:05.209708145Z" level=info msg="cleaning up dead shim" Dec 13 01:59:05.216462 env[1308]: time="2024-12-13T01:59:05.216402539Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:59:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2861 runtime=io.containerd.runc.v2\n" Dec 13 01:59:05.765950 kubelet[2213]: E1213 01:59:05.765891 2213 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z69kl" podUID="7937f569-a24d-4eec-b55c-c7674aa42251" Dec 13 01:59:05.824567 kubelet[2213]: E1213 01:59:05.822525 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:59:05.824567 kubelet[2213]: E1213 01:59:05.822566 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:59:05.826447 env[1308]: time="2024-12-13T01:59:05.826386360Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 01:59:07.765382 kubelet[2213]: E1213 01:59:07.765347 2213 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z69kl" podUID="7937f569-a24d-4eec-b55c-c7674aa42251" Dec 13 01:59:09.765320 kubelet[2213]: E1213 01:59:09.765270 2213 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z69kl" podUID="7937f569-a24d-4eec-b55c-c7674aa42251" Dec 13 01:59:10.884735 env[1308]: time="2024-12-13T01:59:10.884671294Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:59:10.887923 env[1308]: time="2024-12-13T01:59:10.887893551Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:59:10.889823 env[1308]: time="2024-12-13T01:59:10.889776781Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:59:10.891632 env[1308]: time="2024-12-13T01:59:10.891589308Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:59:10.892305 env[1308]: time="2024-12-13T01:59:10.892250822Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Dec 13 01:59:10.894872 env[1308]: time="2024-12-13T01:59:10.894829068Z" level=info msg="CreateContainer within sandbox \"f199fda15efdc091c42141a32ef101f9b5c9f41240cc4a2986e9249b5391bfca\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 01:59:10.908692 env[1308]: time="2024-12-13T01:59:10.908646771Z" level=info msg="CreateContainer within sandbox \"f199fda15efdc091c42141a32ef101f9b5c9f41240cc4a2986e9249b5391bfca\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6ff215b6ef2a15881835575a30fbf87fb69cca946880527ad609f513311e9508\"" Dec 13 01:59:10.909555 env[1308]: time="2024-12-13T01:59:10.909517247Z" level=info msg="StartContainer for \"6ff215b6ef2a15881835575a30fbf87fb69cca946880527ad609f513311e9508\"" Dec 13 01:59:11.759033 env[1308]: time="2024-12-13T01:59:11.758955806Z" level=info msg="StartContainer for \"6ff215b6ef2a15881835575a30fbf87fb69cca946880527ad609f513311e9508\" returns successfully" Dec 13 01:59:11.765698 kubelet[2213]: E1213 01:59:11.765653 2213 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z69kl" podUID="7937f569-a24d-4eec-b55c-c7674aa42251" Dec 13 01:59:11.833585 kubelet[2213]: E1213 01:59:11.833550 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:59:12.834820 kubelet[2213]: E1213 01:59:12.834751 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:59:12.902592 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6ff215b6ef2a15881835575a30fbf87fb69cca946880527ad609f513311e9508-rootfs.mount: Deactivated successfully. 
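The recurring dns.go:153 "Nameserver limits exceeded" warnings record that the host resolv.conf lists more nameservers than kubelet will propagate into pod DNS configuration: kubelet keeps at most three, and the applied line quoted in the message (1.1.1.1 1.0.0.1 8.8.8.8) is what survived the cut. A minimal sketch of that clamping behaviour follows; it is not kubelet's actual code, and the fourth entry is an assumed example of an omitted server:

    // dns_clamp.go - hypothetical illustration of kubelet's per-pod
    // nameserver limit behind the "Nameserver limits exceeded" messages.
    package main

    import "fmt"

    const maxNameservers = 3 // kubelet copies at most three nameservers into a pod

    func clampNameservers(ns []string) []string {
        if len(ns) > maxNameservers {
            return ns[:maxNameservers]
        }
        return ns
    }

    func main() {
        host := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"} // assumed host resolv.conf
        fmt.Println(clampNameservers(host)) // [1.1.1.1 1.0.0.1 8.8.8.8]
    }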
Dec 13 01:59:12.936781 kubelet[2213]: I1213 01:59:12.935555 2213 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 01:59:13.025376 env[1308]: time="2024-12-13T01:59:13.025316480Z" level=info msg="shim disconnected" id=6ff215b6ef2a15881835575a30fbf87fb69cca946880527ad609f513311e9508 Dec 13 01:59:13.025841 env[1308]: time="2024-12-13T01:59:13.025380080Z" level=warning msg="cleaning up after shim disconnected" id=6ff215b6ef2a15881835575a30fbf87fb69cca946880527ad609f513311e9508 namespace=k8s.io Dec 13 01:59:13.025841 env[1308]: time="2024-12-13T01:59:13.025397352Z" level=info msg="cleaning up dead shim" Dec 13 01:59:13.034881 env[1308]: time="2024-12-13T01:59:13.034840646Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:59:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2924 runtime=io.containerd.runc.v2\n" Dec 13 01:59:13.036499 kubelet[2213]: I1213 01:59:13.035670 2213 topology_manager.go:215] "Topology Admit Handler" podUID="d5f89f4f-4968-46b5-9f7b-bc1d64f6aef1" podNamespace="kube-system" podName="coredns-76f75df574-dfz7m" Dec 13 01:59:13.039854 kubelet[2213]: I1213 01:59:13.039821 2213 topology_manager.go:215] "Topology Admit Handler" podUID="55f83b0b-5364-4958-b303-1b06d5dd6c20" podNamespace="calico-apiserver" podName="calico-apiserver-76c6f6c975-mmn2h" Dec 13 01:59:13.040076 kubelet[2213]: I1213 01:59:13.039955 2213 topology_manager.go:215] "Topology Admit Handler" podUID="0356ebac-7712-4e16-9963-c87ca7672297" podNamespace="kube-system" podName="coredns-76f75df574-jqxcg" Dec 13 01:59:13.040076 kubelet[2213]: I1213 01:59:13.040049 2213 topology_manager.go:215] "Topology Admit Handler" podUID="486a846a-be07-4723-8e84-72e633e51630" podNamespace="calico-apiserver" podName="calico-apiserver-76c6f6c975-rhnlz" Dec 13 01:59:13.042332 kubelet[2213]: I1213 01:59:13.042310 2213 topology_manager.go:215] "Topology Admit Handler" podUID="4746d340-1c7d-4392-8db5-c68575618d26" podNamespace="calico-system" podName="calico-kube-controllers-6ff7c669bd-mkmgc" Dec 13 01:59:13.096436 kubelet[2213]: I1213 01:59:13.096305 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j7t9\" (UniqueName: \"kubernetes.io/projected/0356ebac-7712-4e16-9963-c87ca7672297-kube-api-access-4j7t9\") pod \"coredns-76f75df574-jqxcg\" (UID: \"0356ebac-7712-4e16-9963-c87ca7672297\") " pod="kube-system/coredns-76f75df574-jqxcg" Dec 13 01:59:13.096436 kubelet[2213]: I1213 01:59:13.096350 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/55f83b0b-5364-4958-b303-1b06d5dd6c20-calico-apiserver-certs\") pod \"calico-apiserver-76c6f6c975-mmn2h\" (UID: \"55f83b0b-5364-4958-b303-1b06d5dd6c20\") " pod="calico-apiserver/calico-apiserver-76c6f6c975-mmn2h" Dec 13 01:59:13.096436 kubelet[2213]: I1213 01:59:13.096370 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz5w2\" (UniqueName: \"kubernetes.io/projected/d5f89f4f-4968-46b5-9f7b-bc1d64f6aef1-kube-api-access-dz5w2\") pod \"coredns-76f75df574-dfz7m\" (UID: \"d5f89f4f-4968-46b5-9f7b-bc1d64f6aef1\") " pod="kube-system/coredns-76f75df574-dfz7m" Dec 13 01:59:13.096436 kubelet[2213]: I1213 01:59:13.096388 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-649pz\" (UniqueName: 
\"kubernetes.io/projected/486a846a-be07-4723-8e84-72e633e51630-kube-api-access-649pz\") pod \"calico-apiserver-76c6f6c975-rhnlz\" (UID: \"486a846a-be07-4723-8e84-72e633e51630\") " pod="calico-apiserver/calico-apiserver-76c6f6c975-rhnlz" Dec 13 01:59:13.096436 kubelet[2213]: I1213 01:59:13.096408 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4746d340-1c7d-4392-8db5-c68575618d26-tigera-ca-bundle\") pod \"calico-kube-controllers-6ff7c669bd-mkmgc\" (UID: \"4746d340-1c7d-4392-8db5-c68575618d26\") " pod="calico-system/calico-kube-controllers-6ff7c669bd-mkmgc" Dec 13 01:59:13.096829 kubelet[2213]: I1213 01:59:13.096522 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfmc5\" (UniqueName: \"kubernetes.io/projected/4746d340-1c7d-4392-8db5-c68575618d26-kube-api-access-xfmc5\") pod \"calico-kube-controllers-6ff7c669bd-mkmgc\" (UID: \"4746d340-1c7d-4392-8db5-c68575618d26\") " pod="calico-system/calico-kube-controllers-6ff7c669bd-mkmgc" Dec 13 01:59:13.096829 kubelet[2213]: I1213 01:59:13.096616 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/486a846a-be07-4723-8e84-72e633e51630-calico-apiserver-certs\") pod \"calico-apiserver-76c6f6c975-rhnlz\" (UID: \"486a846a-be07-4723-8e84-72e633e51630\") " pod="calico-apiserver/calico-apiserver-76c6f6c975-rhnlz" Dec 13 01:59:13.096829 kubelet[2213]: I1213 01:59:13.096663 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvqll\" (UniqueName: \"kubernetes.io/projected/55f83b0b-5364-4958-b303-1b06d5dd6c20-kube-api-access-hvqll\") pod \"calico-apiserver-76c6f6c975-mmn2h\" (UID: \"55f83b0b-5364-4958-b303-1b06d5dd6c20\") " pod="calico-apiserver/calico-apiserver-76c6f6c975-mmn2h" Dec 13 01:59:13.096829 kubelet[2213]: I1213 01:59:13.096697 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d5f89f4f-4968-46b5-9f7b-bc1d64f6aef1-config-volume\") pod \"coredns-76f75df574-dfz7m\" (UID: \"d5f89f4f-4968-46b5-9f7b-bc1d64f6aef1\") " pod="kube-system/coredns-76f75df574-dfz7m" Dec 13 01:59:13.096829 kubelet[2213]: I1213 01:59:13.096729 2213 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0356ebac-7712-4e16-9963-c87ca7672297-config-volume\") pod \"coredns-76f75df574-jqxcg\" (UID: \"0356ebac-7712-4e16-9963-c87ca7672297\") " pod="kube-system/coredns-76f75df574-jqxcg" Dec 13 01:59:13.166391 systemd[1]: Started sshd@7-10.0.0.48:22-10.0.0.1:39752.service. Dec 13 01:59:13.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.48:22-10.0.0.1:39752 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:13.167667 kernel: kauditd_printk_skb: 14 callbacks suppressed Dec 13 01:59:13.167732 kernel: audit: type=1130 audit(1734055153.165:288): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.48:22-10.0.0.1:39752 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:59:13.205803 sshd[2936]: Accepted publickey for core from 10.0.0.1 port 39752 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 01:59:13.204622 sshd[2936]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:59:13.202000 audit[2936]: USER_ACCT pid=2936 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:13.219171 kernel: audit: type=1101 audit(1734055153.202:289): pid=2936 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:13.219283 kernel: audit: type=1103 audit(1734055153.203:290): pid=2936 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:13.203000 audit[2936]: CRED_ACQ pid=2936 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:13.224441 kernel: audit: type=1006 audit(1734055153.203:291): pid=2936 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Dec 13 01:59:13.203000 audit[2936]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe4b064d40 a2=3 a3=0 items=0 ppid=1 pid=2936 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:13.228333 systemd-logind[1291]: New session 8 of user core. Dec 13 01:59:13.229469 systemd[1]: Started session-8.scope. 
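In these audit records the PROCTITLE value is a hex encoding of the process command line as read from /proc/<pid>/cmdline, where arguments are separated by NUL bytes. The value attached to the NETFILTER_CFG events earlier decodes to "iptables-restore -w 5 -W 100000 --noflush --counters", and the one in the sshd records decodes to "sshd: core [priv]". A small, hypothetical Go helper showing the decoding:

    // proctitle_decode.go - hypothetical decoder for audit PROCTITLE values.
    package main

    import (
        "encoding/hex"
        "fmt"
        "strings"
    )

    func decodeProctitle(h string) (string, error) {
        raw, err := hex.DecodeString(h)
        if err != nil {
            return "", err
        }
        // argv entries are NUL-separated, as in /proc/<pid>/cmdline.
        return strings.ReplaceAll(strings.TrimRight(string(raw), "\x00"), "\x00", " "), nil
    }

    func main() {
        s, _ := decodeProctitle("69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273")
        fmt.Println(s) // iptables-restore -w 5 -W 100000 --noflush --counters
    }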
Dec 13 01:59:13.229695 kernel: audit: type=1300 audit(1734055153.203:291): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe4b064d40 a2=3 a3=0 items=0 ppid=1 pid=2936 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:13.203000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 01:59:13.231795 kernel: audit: type=1327 audit(1734055153.203:291): proctitle=737368643A20636F7265205B707269765D Dec 13 01:59:13.233000 audit[2936]: USER_START pid=2936 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:13.234000 audit[2946]: CRED_ACQ pid=2946 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:13.243928 kernel: audit: type=1105 audit(1734055153.233:292): pid=2936 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:13.244006 kernel: audit: type=1103 audit(1734055153.234:293): pid=2946 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:13.344920 kubelet[2213]: E1213 01:59:13.344868 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:59:13.345427 env[1308]: time="2024-12-13T01:59:13.345395810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-jqxcg,Uid:0356ebac-7712-4e16-9963-c87ca7672297,Namespace:kube-system,Attempt:0,}" Dec 13 01:59:13.345857 kubelet[2213]: E1213 01:59:13.345843 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:59:13.346086 env[1308]: time="2024-12-13T01:59:13.346063836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-dfz7m,Uid:d5f89f4f-4968-46b5-9f7b-bc1d64f6aef1,Namespace:kube-system,Attempt:0,}" Dec 13 01:59:13.346398 env[1308]: time="2024-12-13T01:59:13.346372776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76c6f6c975-rhnlz,Uid:486a846a-be07-4723-8e84-72e633e51630,Namespace:calico-apiserver,Attempt:0,}" Dec 13 01:59:13.354695 env[1308]: time="2024-12-13T01:59:13.354584465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76c6f6c975-mmn2h,Uid:55f83b0b-5364-4958-b303-1b06d5dd6c20,Namespace:calico-apiserver,Attempt:0,}" Dec 13 01:59:13.355673 env[1308]: time="2024-12-13T01:59:13.355636873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6ff7c669bd-mkmgc,Uid:4746d340-1c7d-4392-8db5-c68575618d26,Namespace:calico-system,Attempt:0,}" Dec 13 01:59:13.360442 sshd[2936]: pam_unix(sshd:session): session closed for user core Dec 13 
01:59:13.360000 audit[2936]: USER_END pid=2936 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:13.366112 systemd[1]: sshd@7-10.0.0.48:22-10.0.0.1:39752.service: Deactivated successfully. Dec 13 01:59:13.366826 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 01:59:13.360000 audit[2936]: CRED_DISP pid=2936 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:13.367372 systemd-logind[1291]: Session 8 logged out. Waiting for processes to exit. Dec 13 01:59:13.368048 systemd-logind[1291]: Removed session 8. Dec 13 01:59:13.371417 kernel: audit: type=1106 audit(1734055153.360:294): pid=2936 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:13.371600 kernel: audit: type=1104 audit(1734055153.360:295): pid=2936 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:13.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.48:22-10.0.0.1:39752 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:59:13.483520 env[1308]: time="2024-12-13T01:59:13.483426468Z" level=error msg="Failed to destroy network for sandbox \"979c13085af9ea43644edf2ed921c4edb455c52536dacfe82ed22e69a8d5ff28\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:59:13.483938 env[1308]: time="2024-12-13T01:59:13.483897403Z" level=error msg="encountered an error cleaning up failed sandbox \"979c13085af9ea43644edf2ed921c4edb455c52536dacfe82ed22e69a8d5ff28\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:59:13.483978 env[1308]: time="2024-12-13T01:59:13.483956384Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-jqxcg,Uid:0356ebac-7712-4e16-9963-c87ca7672297,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"979c13085af9ea43644edf2ed921c4edb455c52536dacfe82ed22e69a8d5ff28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:59:13.484603 kubelet[2213]: E1213 01:59:13.484220 2213 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"979c13085af9ea43644edf2ed921c4edb455c52536dacfe82ed22e69a8d5ff28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:59:13.484603 kubelet[2213]: E1213 01:59:13.484291 2213 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"979c13085af9ea43644edf2ed921c4edb455c52536dacfe82ed22e69a8d5ff28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-jqxcg" Dec 13 01:59:13.484603 kubelet[2213]: E1213 01:59:13.484336 2213 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"979c13085af9ea43644edf2ed921c4edb455c52536dacfe82ed22e69a8d5ff28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-jqxcg" Dec 13 01:59:13.484731 kubelet[2213]: E1213 01:59:13.484401 2213 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-jqxcg_kube-system(0356ebac-7712-4e16-9963-c87ca7672297)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-jqxcg_kube-system(0356ebac-7712-4e16-9963-c87ca7672297)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"979c13085af9ea43644edf2ed921c4edb455c52536dacfe82ed22e69a8d5ff28\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-76f75df574-jqxcg" podUID="0356ebac-7712-4e16-9963-c87ca7672297" Dec 13 01:59:13.492898 env[1308]: time="2024-12-13T01:59:13.492819156Z" level=error msg="Failed to destroy network for sandbox \"c4e7b8e0bf913318ecc0fbd0a00b9520d7886a87b3238ff947e3fd43d3e03d05\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:59:13.493226 env[1308]: time="2024-12-13T01:59:13.493197788Z" level=error msg="encountered an error cleaning up failed sandbox \"c4e7b8e0bf913318ecc0fbd0a00b9520d7886a87b3238ff947e3fd43d3e03d05\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:59:13.493295 env[1308]: time="2024-12-13T01:59:13.493247541Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76c6f6c975-mmn2h,Uid:55f83b0b-5364-4958-b303-1b06d5dd6c20,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c4e7b8e0bf913318ecc0fbd0a00b9520d7886a87b3238ff947e3fd43d3e03d05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:59:13.494401 kubelet[2213]: E1213 01:59:13.493489 2213 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4e7b8e0bf913318ecc0fbd0a00b9520d7886a87b3238ff947e3fd43d3e03d05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:59:13.494401 kubelet[2213]: E1213 01:59:13.493545 2213 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4e7b8e0bf913318ecc0fbd0a00b9520d7886a87b3238ff947e3fd43d3e03d05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76c6f6c975-mmn2h" Dec 13 01:59:13.494401 kubelet[2213]: E1213 01:59:13.493568 2213 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4e7b8e0bf913318ecc0fbd0a00b9520d7886a87b3238ff947e3fd43d3e03d05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76c6f6c975-mmn2h" Dec 13 01:59:13.494537 kubelet[2213]: E1213 01:59:13.493623 2213 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-76c6f6c975-mmn2h_calico-apiserver(55f83b0b-5364-4958-b303-1b06d5dd6c20)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76c6f6c975-mmn2h_calico-apiserver(55f83b0b-5364-4958-b303-1b06d5dd6c20)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c4e7b8e0bf913318ecc0fbd0a00b9520d7886a87b3238ff947e3fd43d3e03d05\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76c6f6c975-mmn2h" podUID="55f83b0b-5364-4958-b303-1b06d5dd6c20" Dec 13 01:59:13.503623 env[1308]: time="2024-12-13T01:59:13.503542184Z" level=error msg="Failed to destroy network for sandbox \"f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:59:13.504060 env[1308]: time="2024-12-13T01:59:13.504012467Z" level=error msg="encountered an error cleaning up failed sandbox \"f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:59:13.504111 env[1308]: time="2024-12-13T01:59:13.504068903Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-dfz7m,Uid:d5f89f4f-4968-46b5-9f7b-bc1d64f6aef1,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:59:13.504386 kubelet[2213]: E1213 01:59:13.504343 2213 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:59:13.504553 kubelet[2213]: E1213 01:59:13.504429 2213 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-dfz7m" Dec 13 01:59:13.504553 kubelet[2213]: E1213 01:59:13.504456 2213 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-dfz7m" Dec 13 01:59:13.504553 kubelet[2213]: E1213 01:59:13.504505 2213 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-dfz7m_kube-system(d5f89f4f-4968-46b5-9f7b-bc1d64f6aef1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-dfz7m_kube-system(d5f89f4f-4968-46b5-9f7b-bc1d64f6aef1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-dfz7m" podUID="d5f89f4f-4968-46b5-9f7b-bc1d64f6aef1" Dec 13 01:59:13.504852 env[1308]: time="2024-12-13T01:59:13.504819934Z" level=error msg="Failed to destroy network for sandbox \"38781b7abae0ebfd72664d0396e266f8efc21e1f10de067f1aa7e04aa10ab54d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:59:13.505224 env[1308]: time="2024-12-13T01:59:13.505193446Z" level=error msg="encountered an error cleaning up failed sandbox \"38781b7abae0ebfd72664d0396e266f8efc21e1f10de067f1aa7e04aa10ab54d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:59:13.505355 env[1308]: time="2024-12-13T01:59:13.505320144Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76c6f6c975-rhnlz,Uid:486a846a-be07-4723-8e84-72e633e51630,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"38781b7abae0ebfd72664d0396e266f8efc21e1f10de067f1aa7e04aa10ab54d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:59:13.505922 kubelet[2213]: E1213 01:59:13.505709 2213 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38781b7abae0ebfd72664d0396e266f8efc21e1f10de067f1aa7e04aa10ab54d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:59:13.505922 kubelet[2213]: E1213 01:59:13.505799 2213 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38781b7abae0ebfd72664d0396e266f8efc21e1f10de067f1aa7e04aa10ab54d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76c6f6c975-rhnlz" Dec 13 01:59:13.505922 kubelet[2213]: E1213 01:59:13.505826 2213 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38781b7abae0ebfd72664d0396e266f8efc21e1f10de067f1aa7e04aa10ab54d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76c6f6c975-rhnlz" Dec 13 01:59:13.506050 kubelet[2213]: E1213 01:59:13.505889 2213 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-76c6f6c975-rhnlz_calico-apiserver(486a846a-be07-4723-8e84-72e633e51630)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76c6f6c975-rhnlz_calico-apiserver(486a846a-be07-4723-8e84-72e633e51630)\\\": rpc error: code = Unknown desc = failed to 
setup network for sandbox \\\"38781b7abae0ebfd72664d0396e266f8efc21e1f10de067f1aa7e04aa10ab54d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76c6f6c975-rhnlz" podUID="486a846a-be07-4723-8e84-72e633e51630" Dec 13 01:59:13.508465 env[1308]: time="2024-12-13T01:59:13.508403187Z" level=error msg="Failed to destroy network for sandbox \"d75b849d433e4c95b5fa38ec6af9f0dcb9f4c6305351cd07269cd841dbd45836\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:59:13.508815 env[1308]: time="2024-12-13T01:59:13.508785405Z" level=error msg="encountered an error cleaning up failed sandbox \"d75b849d433e4c95b5fa38ec6af9f0dcb9f4c6305351cd07269cd841dbd45836\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:59:13.508904 env[1308]: time="2024-12-13T01:59:13.508830610Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6ff7c669bd-mkmgc,Uid:4746d340-1c7d-4392-8db5-c68575618d26,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d75b849d433e4c95b5fa38ec6af9f0dcb9f4c6305351cd07269cd841dbd45836\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:59:13.509067 kubelet[2213]: E1213 01:59:13.509026 2213 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d75b849d433e4c95b5fa38ec6af9f0dcb9f4c6305351cd07269cd841dbd45836\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:59:13.509140 kubelet[2213]: E1213 01:59:13.509098 2213 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d75b849d433e4c95b5fa38ec6af9f0dcb9f4c6305351cd07269cd841dbd45836\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6ff7c669bd-mkmgc" Dec 13 01:59:13.509140 kubelet[2213]: E1213 01:59:13.509129 2213 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d75b849d433e4c95b5fa38ec6af9f0dcb9f4c6305351cd07269cd841dbd45836\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6ff7c669bd-mkmgc" Dec 13 01:59:13.509201 kubelet[2213]: E1213 01:59:13.509183 2213 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6ff7c669bd-mkmgc_calico-system(4746d340-1c7d-4392-8db5-c68575618d26)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-6ff7c669bd-mkmgc_calico-system(4746d340-1c7d-4392-8db5-c68575618d26)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d75b849d433e4c95b5fa38ec6af9f0dcb9f4c6305351cd07269cd841dbd45836\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6ff7c669bd-mkmgc" podUID="4746d340-1c7d-4392-8db5-c68575618d26" Dec 13 01:59:13.767623 env[1308]: time="2024-12-13T01:59:13.767513422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z69kl,Uid:7937f569-a24d-4eec-b55c-c7674aa42251,Namespace:calico-system,Attempt:0,}" Dec 13 01:59:13.822954 env[1308]: time="2024-12-13T01:59:13.822893726Z" level=error msg="Failed to destroy network for sandbox \"8b8b0df0c0edd47872b987ecccb00443d71b1feaa3b14e4c3d68228795ca6ffa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:59:13.823258 env[1308]: time="2024-12-13T01:59:13.823221312Z" level=error msg="encountered an error cleaning up failed sandbox \"8b8b0df0c0edd47872b987ecccb00443d71b1feaa3b14e4c3d68228795ca6ffa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:59:13.823316 env[1308]: time="2024-12-13T01:59:13.823272749Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z69kl,Uid:7937f569-a24d-4eec-b55c-c7674aa42251,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8b8b0df0c0edd47872b987ecccb00443d71b1feaa3b14e4c3d68228795ca6ffa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:59:13.823536 kubelet[2213]: E1213 01:59:13.823508 2213 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b8b0df0c0edd47872b987ecccb00443d71b1feaa3b14e4c3d68228795ca6ffa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:59:13.823589 kubelet[2213]: E1213 01:59:13.823562 2213 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b8b0df0c0edd47872b987ecccb00443d71b1feaa3b14e4c3d68228795ca6ffa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z69kl" Dec 13 01:59:13.823589 kubelet[2213]: E1213 01:59:13.823584 2213 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b8b0df0c0edd47872b987ecccb00443d71b1feaa3b14e4c3d68228795ca6ffa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-z69kl" Dec 13 01:59:13.823643 kubelet[2213]: E1213 01:59:13.823638 2213 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-z69kl_calico-system(7937f569-a24d-4eec-b55c-c7674aa42251)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-z69kl_calico-system(7937f569-a24d-4eec-b55c-c7674aa42251)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8b8b0df0c0edd47872b987ecccb00443d71b1feaa3b14e4c3d68228795ca6ffa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-z69kl" podUID="7937f569-a24d-4eec-b55c-c7674aa42251" Dec 13 01:59:13.836249 kubelet[2213]: I1213 01:59:13.836222 2213 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c" Dec 13 01:59:13.836870 env[1308]: time="2024-12-13T01:59:13.836823650Z" level=info msg="StopPodSandbox for \"f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c\"" Dec 13 01:59:13.838223 kubelet[2213]: E1213 01:59:13.838193 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:59:13.839250 kubelet[2213]: I1213 01:59:13.839228 2213 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4e7b8e0bf913318ecc0fbd0a00b9520d7886a87b3238ff947e3fd43d3e03d05" Dec 13 01:59:13.839533 env[1308]: time="2024-12-13T01:59:13.839202390Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 01:59:13.840860 env[1308]: time="2024-12-13T01:59:13.840701326Z" level=info msg="StopPodSandbox for \"c4e7b8e0bf913318ecc0fbd0a00b9520d7886a87b3238ff947e3fd43d3e03d05\"" Dec 13 01:59:13.847836 kubelet[2213]: I1213 01:59:13.845966 2213 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38781b7abae0ebfd72664d0396e266f8efc21e1f10de067f1aa7e04aa10ab54d" Dec 13 01:59:13.847836 kubelet[2213]: I1213 01:59:13.847663 2213 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b8b0df0c0edd47872b987ecccb00443d71b1feaa3b14e4c3d68228795ca6ffa" Dec 13 01:59:13.848110 env[1308]: time="2024-12-13T01:59:13.846536119Z" level=info msg="StopPodSandbox for \"38781b7abae0ebfd72664d0396e266f8efc21e1f10de067f1aa7e04aa10ab54d\"" Dec 13 01:59:13.848280 env[1308]: time="2024-12-13T01:59:13.848243086Z" level=info msg="StopPodSandbox for \"8b8b0df0c0edd47872b987ecccb00443d71b1feaa3b14e4c3d68228795ca6ffa\"" Dec 13 01:59:13.849333 kubelet[2213]: I1213 01:59:13.849305 2213 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="979c13085af9ea43644edf2ed921c4edb455c52536dacfe82ed22e69a8d5ff28" Dec 13 01:59:13.849857 env[1308]: time="2024-12-13T01:59:13.849823395Z" level=info msg="StopPodSandbox for \"979c13085af9ea43644edf2ed921c4edb455c52536dacfe82ed22e69a8d5ff28\"" Dec 13 01:59:13.851079 kubelet[2213]: I1213 01:59:13.851049 2213 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d75b849d433e4c95b5fa38ec6af9f0dcb9f4c6305351cd07269cd841dbd45836" Dec 13 01:59:13.851592 env[1308]: time="2024-12-13T01:59:13.851554739Z" level=info msg="StopPodSandbox for 
\"d75b849d433e4c95b5fa38ec6af9f0dcb9f4c6305351cd07269cd841dbd45836\"" Dec 13 01:59:13.879312 env[1308]: time="2024-12-13T01:59:13.879244625Z" level=error msg="StopPodSandbox for \"f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c\" failed" error="failed to destroy network for sandbox \"f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:59:13.879570 kubelet[2213]: E1213 01:59:13.879542 2213 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c" Dec 13 01:59:13.879652 kubelet[2213]: E1213 01:59:13.879642 2213 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c"} Dec 13 01:59:13.879701 kubelet[2213]: E1213 01:59:13.879690 2213 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d5f89f4f-4968-46b5-9f7b-bc1d64f6aef1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:59:13.879825 kubelet[2213]: E1213 01:59:13.879739 2213 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d5f89f4f-4968-46b5-9f7b-bc1d64f6aef1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-dfz7m" podUID="d5f89f4f-4968-46b5-9f7b-bc1d64f6aef1" Dec 13 01:59:13.896216 env[1308]: time="2024-12-13T01:59:13.896142224Z" level=error msg="StopPodSandbox for \"c4e7b8e0bf913318ecc0fbd0a00b9520d7886a87b3238ff947e3fd43d3e03d05\" failed" error="failed to destroy network for sandbox \"c4e7b8e0bf913318ecc0fbd0a00b9520d7886a87b3238ff947e3fd43d3e03d05\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:59:13.896697 kubelet[2213]: E1213 01:59:13.896665 2213 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c4e7b8e0bf913318ecc0fbd0a00b9520d7886a87b3238ff947e3fd43d3e03d05\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c4e7b8e0bf913318ecc0fbd0a00b9520d7886a87b3238ff947e3fd43d3e03d05" Dec 13 
01:59:13.896823 kubelet[2213]: E1213 01:59:13.896726 2213 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c4e7b8e0bf913318ecc0fbd0a00b9520d7886a87b3238ff947e3fd43d3e03d05"} Dec 13 01:59:13.896823 kubelet[2213]: E1213 01:59:13.896792 2213 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"55f83b0b-5364-4958-b303-1b06d5dd6c20\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c4e7b8e0bf913318ecc0fbd0a00b9520d7886a87b3238ff947e3fd43d3e03d05\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:59:13.896937 kubelet[2213]: E1213 01:59:13.896844 2213 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"55f83b0b-5364-4958-b303-1b06d5dd6c20\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c4e7b8e0bf913318ecc0fbd0a00b9520d7886a87b3238ff947e3fd43d3e03d05\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76c6f6c975-mmn2h" podUID="55f83b0b-5364-4958-b303-1b06d5dd6c20" Dec 13 01:59:13.898878 env[1308]: time="2024-12-13T01:59:13.898824624Z" level=error msg="StopPodSandbox for \"38781b7abae0ebfd72664d0396e266f8efc21e1f10de067f1aa7e04aa10ab54d\" failed" error="failed to destroy network for sandbox \"38781b7abae0ebfd72664d0396e266f8efc21e1f10de067f1aa7e04aa10ab54d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:59:13.899062 kubelet[2213]: E1213 01:59:13.899041 2213 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"38781b7abae0ebfd72664d0396e266f8efc21e1f10de067f1aa7e04aa10ab54d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="38781b7abae0ebfd72664d0396e266f8efc21e1f10de067f1aa7e04aa10ab54d" Dec 13 01:59:13.899341 kubelet[2213]: E1213 01:59:13.899319 2213 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"38781b7abae0ebfd72664d0396e266f8efc21e1f10de067f1aa7e04aa10ab54d"} Dec 13 01:59:13.899522 kubelet[2213]: E1213 01:59:13.899443 2213 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"486a846a-be07-4723-8e84-72e633e51630\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"38781b7abae0ebfd72664d0396e266f8efc21e1f10de067f1aa7e04aa10ab54d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:59:13.899617 kubelet[2213]: E1213 01:59:13.899563 2213 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"486a846a-be07-4723-8e84-72e633e51630\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"38781b7abae0ebfd72664d0396e266f8efc21e1f10de067f1aa7e04aa10ab54d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76c6f6c975-rhnlz" podUID="486a846a-be07-4723-8e84-72e633e51630" Dec 13 01:59:13.921854 env[1308]: time="2024-12-13T01:59:13.921792509Z" level=error msg="StopPodSandbox for \"8b8b0df0c0edd47872b987ecccb00443d71b1feaa3b14e4c3d68228795ca6ffa\" failed" error="failed to destroy network for sandbox \"8b8b0df0c0edd47872b987ecccb00443d71b1feaa3b14e4c3d68228795ca6ffa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:59:13.922120 kubelet[2213]: E1213 01:59:13.922094 2213 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8b8b0df0c0edd47872b987ecccb00443d71b1feaa3b14e4c3d68228795ca6ffa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8b8b0df0c0edd47872b987ecccb00443d71b1feaa3b14e4c3d68228795ca6ffa" Dec 13 01:59:13.922191 kubelet[2213]: E1213 01:59:13.922151 2213 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8b8b0df0c0edd47872b987ecccb00443d71b1feaa3b14e4c3d68228795ca6ffa"} Dec 13 01:59:13.922226 kubelet[2213]: E1213 01:59:13.922196 2213 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7937f569-a24d-4eec-b55c-c7674aa42251\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8b8b0df0c0edd47872b987ecccb00443d71b1feaa3b14e4c3d68228795ca6ffa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:59:13.922301 kubelet[2213]: E1213 01:59:13.922249 2213 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7937f569-a24d-4eec-b55c-c7674aa42251\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8b8b0df0c0edd47872b987ecccb00443d71b1feaa3b14e4c3d68228795ca6ffa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-z69kl" podUID="7937f569-a24d-4eec-b55c-c7674aa42251" Dec 13 01:59:13.925591 env[1308]: time="2024-12-13T01:59:13.925551000Z" level=error msg="StopPodSandbox for \"d75b849d433e4c95b5fa38ec6af9f0dcb9f4c6305351cd07269cd841dbd45836\" failed" error="failed to destroy network for sandbox \"d75b849d433e4c95b5fa38ec6af9f0dcb9f4c6305351cd07269cd841dbd45836\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:59:13.926069 kubelet[2213]: E1213 01:59:13.925912 2213 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"d75b849d433e4c95b5fa38ec6af9f0dcb9f4c6305351cd07269cd841dbd45836\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d75b849d433e4c95b5fa38ec6af9f0dcb9f4c6305351cd07269cd841dbd45836" Dec 13 01:59:13.926069 kubelet[2213]: E1213 01:59:13.925953 2213 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d75b849d433e4c95b5fa38ec6af9f0dcb9f4c6305351cd07269cd841dbd45836"} Dec 13 01:59:13.926069 kubelet[2213]: E1213 01:59:13.926003 2213 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4746d340-1c7d-4392-8db5-c68575618d26\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d75b849d433e4c95b5fa38ec6af9f0dcb9f4c6305351cd07269cd841dbd45836\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:59:13.926069 kubelet[2213]: E1213 01:59:13.926042 2213 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4746d340-1c7d-4392-8db5-c68575618d26\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d75b849d433e4c95b5fa38ec6af9f0dcb9f4c6305351cd07269cd841dbd45836\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6ff7c669bd-mkmgc" podUID="4746d340-1c7d-4392-8db5-c68575618d26" Dec 13 01:59:13.926385 env[1308]: time="2024-12-13T01:59:13.926347938Z" level=error msg="StopPodSandbox for \"979c13085af9ea43644edf2ed921c4edb455c52536dacfe82ed22e69a8d5ff28\" failed" error="failed to destroy network for sandbox \"979c13085af9ea43644edf2ed921c4edb455c52536dacfe82ed22e69a8d5ff28\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:59:13.926583 kubelet[2213]: E1213 01:59:13.926558 2213 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"979c13085af9ea43644edf2ed921c4edb455c52536dacfe82ed22e69a8d5ff28\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="979c13085af9ea43644edf2ed921c4edb455c52536dacfe82ed22e69a8d5ff28" Dec 13 01:59:13.926635 kubelet[2213]: E1213 01:59:13.926586 2213 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"979c13085af9ea43644edf2ed921c4edb455c52536dacfe82ed22e69a8d5ff28"} Dec 13 01:59:13.926635 kubelet[2213]: E1213 01:59:13.926612 2213 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0356ebac-7712-4e16-9963-c87ca7672297\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"979c13085af9ea43644edf2ed921c4edb455c52536dacfe82ed22e69a8d5ff28\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/\"" Dec 13 01:59:13.926635 kubelet[2213]: E1213 01:59:13.926634 2213 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0356ebac-7712-4e16-9963-c87ca7672297\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"979c13085af9ea43644edf2ed921c4edb455c52536dacfe82ed22e69a8d5ff28\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-jqxcg" podUID="0356ebac-7712-4e16-9963-c87ca7672297" Dec 13 01:59:18.363977 systemd[1]: Started sshd@8-10.0.0.48:22-10.0.0.1:56194.service. Dec 13 01:59:18.369411 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 01:59:18.369456 kernel: audit: type=1130 audit(1734055158.363:297): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.48:22-10.0.0.1:56194 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:18.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.48:22-10.0.0.1:56194 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:18.460086 sshd[3330]: Accepted publickey for core from 10.0.0.1 port 56194 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 01:59:18.459000 audit[3330]: USER_ACCT pid=3330 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:18.466096 kernel: audit: type=1101 audit(1734055158.459:298): pid=3330 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:18.466242 kernel: audit: type=1103 audit(1734055158.464:299): pid=3330 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:18.464000 audit[3330]: CRED_ACQ pid=3330 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:18.466332 sshd[3330]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:59:18.474436 kernel: audit: type=1006 audit(1734055158.464:300): pid=3330 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Dec 13 01:59:18.475775 kernel: audit: type=1300 audit(1734055158.464:300): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff09d17c80 a2=3 a3=0 items=0 ppid=1 pid=3330 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:18.464000 audit[3330]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff09d17c80 a2=3 a3=0 items=0 ppid=1 pid=3330 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:18.476123 systemd[1]: Started session-9.scope. Dec 13 01:59:18.478987 systemd-logind[1291]: New session 9 of user core. Dec 13 01:59:18.482788 kernel: audit: type=1327 audit(1734055158.464:300): proctitle=737368643A20636F7265205B707269765D Dec 13 01:59:18.464000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 01:59:18.493000 audit[3330]: USER_START pid=3330 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:18.496000 audit[3333]: CRED_ACQ pid=3333 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:18.503414 kernel: audit: type=1105 audit(1734055158.493:301): pid=3330 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:18.503468 kernel: audit: type=1103 audit(1734055158.496:302): pid=3333 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:18.616286 sshd[3330]: pam_unix(sshd:session): session closed for user core Dec 13 01:59:18.617000 audit[3330]: USER_END pid=3330 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:18.619919 systemd[1]: sshd@8-10.0.0.48:22-10.0.0.1:56194.service: Deactivated successfully. Dec 13 01:59:18.621089 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 01:59:18.621756 systemd-logind[1291]: Session 9 logged out. Waiting for processes to exit. Dec 13 01:59:18.617000 audit[3330]: CRED_DISP pid=3330 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:18.622675 systemd-logind[1291]: Removed session 9. 
Dec 13 01:59:18.626140 kernel: audit: type=1106 audit(1734055158.617:303): pid=3330 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:18.626236 kernel: audit: type=1104 audit(1734055158.617:304): pid=3330 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:18.619000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.48:22-10.0.0.1:56194 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:20.080968 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3228258337.mount: Deactivated successfully. Dec 13 01:59:21.204862 env[1308]: time="2024-12-13T01:59:21.204791610Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:59:21.207021 env[1308]: time="2024-12-13T01:59:21.206991811Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:59:21.208640 env[1308]: time="2024-12-13T01:59:21.208608306Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:59:21.210076 env[1308]: time="2024-12-13T01:59:21.210034353Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:59:21.210505 env[1308]: time="2024-12-13T01:59:21.210443391Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Dec 13 01:59:21.219388 env[1308]: time="2024-12-13T01:59:21.219347168Z" level=info msg="CreateContainer within sandbox \"f199fda15efdc091c42141a32ef101f9b5c9f41240cc4a2986e9249b5391bfca\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 01:59:21.235507 env[1308]: time="2024-12-13T01:59:21.235447486Z" level=info msg="CreateContainer within sandbox \"f199fda15efdc091c42141a32ef101f9b5c9f41240cc4a2986e9249b5391bfca\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"405b99dc86b75d2b7ef93a1c646494d3a6419c78071834b774a4a3f5ca7544ae\"" Dec 13 01:59:21.236120 env[1308]: time="2024-12-13T01:59:21.236073231Z" level=info msg="StartContainer for \"405b99dc86b75d2b7ef93a1c646494d3a6419c78071834b774a4a3f5ca7544ae\"" Dec 13 01:59:21.287465 env[1308]: time="2024-12-13T01:59:21.287409494Z" level=info msg="StartContainer for \"405b99dc86b75d2b7ef93a1c646494d3a6419c78071834b774a4a3f5ca7544ae\" returns successfully" Dec 13 01:59:21.364055 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 01:59:21.364234 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. 
Dec 13 01:59:21.868653 kubelet[2213]: E1213 01:59:21.868624 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:59:21.935137 kubelet[2213]: I1213 01:59:21.935095 2213 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-s8cbw" podStartSLOduration=1.658562881 podStartE2EDuration="21.935038131s" podCreationTimestamp="2024-12-13 01:59:00 +0000 UTC" firstStartedPulling="2024-12-13 01:59:00.934341933 +0000 UTC m=+21.302690170" lastFinishedPulling="2024-12-13 01:59:21.210817173 +0000 UTC m=+41.579165420" observedRunningTime="2024-12-13 01:59:21.934900473 +0000 UTC m=+42.303248710" watchObservedRunningTime="2024-12-13 01:59:21.935038131 +0000 UTC m=+42.303386368" Dec 13 01:59:22.633000 audit[3454]: AVC avc: denied { write } for pid=3454 comm="tee" name="fd" dev="proc" ino=25058 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 01:59:22.633000 audit[3454]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcb3619a28 a2=241 a3=1b6 items=1 ppid=3423 pid=3454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:22.633000 audit: CWD cwd="/etc/service/enabled/cni/log" Dec 13 01:59:22.633000 audit: PATH item=0 name="/dev/fd/63" inode=23345 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:22.633000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 01:59:22.637000 audit[3447]: AVC avc: denied { write } for pid=3447 comm="tee" name="fd" dev="proc" ino=22440 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 01:59:22.637000 audit[3447]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe2dfa7a16 a2=241 a3=1b6 items=1 ppid=3422 pid=3447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:22.637000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Dec 13 01:59:22.637000 audit: PATH item=0 name="/dev/fd/63" inode=23340 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:22.637000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 01:59:22.637000 audit[3466]: AVC avc: denied { write } for pid=3466 comm="tee" name="fd" dev="proc" ino=22444 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 01:59:22.637000 audit[3466]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc4a442a17 a2=241 a3=1b6 items=1 ppid=3429 pid=3466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:22.637000 audit: CWD 
cwd="/etc/service/enabled/node-status-reporter/log" Dec 13 01:59:22.637000 audit: PATH item=0 name="/dev/fd/63" inode=24257 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:22.637000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 01:59:22.643000 audit[3483]: AVC avc: denied { write } for pid=3483 comm="tee" name="fd" dev="proc" ino=22452 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 01:59:22.643000 audit[3483]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcdf837a26 a2=241 a3=1b6 items=1 ppid=3427 pid=3483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:22.643000 audit: CWD cwd="/etc/service/enabled/bird6/log" Dec 13 01:59:22.643000 audit: PATH item=0 name="/dev/fd/63" inode=25070 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:22.643000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 01:59:22.649000 audit[3486]: AVC avc: denied { write } for pid=3486 comm="tee" name="fd" dev="proc" ino=22458 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 01:59:22.649000 audit[3486]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff098caa26 a2=241 a3=1b6 items=1 ppid=3431 pid=3486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:22.649000 audit: CWD cwd="/etc/service/enabled/confd/log" Dec 13 01:59:22.649000 audit: PATH item=0 name="/dev/fd/63" inode=25071 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:22.649000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 01:59:22.651000 audit[3473]: AVC avc: denied { write } for pid=3473 comm="tee" name="fd" dev="proc" ino=25074 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 01:59:22.651000 audit[3473]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc37009a26 a2=241 a3=1b6 items=1 ppid=3428 pid=3473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:22.651000 audit: CWD cwd="/etc/service/enabled/felix/log" Dec 13 01:59:22.651000 audit: PATH item=0 name="/dev/fd/63" inode=25068 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:22.651000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 01:59:22.663000 audit[3498]: AVC avc: denied { write } for pid=3498 comm="tee" name="fd" dev="proc" ino=22466 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 01:59:22.663000 audit[3498]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc0aa56a27 a2=241 a3=1b6 items=1 ppid=3420 pid=3498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:22.663000 audit: CWD cwd="/etc/service/enabled/bird/log" Dec 13 01:59:22.663000 audit: PATH item=0 name="/dev/fd/63" inode=24262 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 01:59:22.663000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 01:59:22.811000 audit[3536]: AVC avc: denied { bpf } for pid=3536 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.811000 audit[3536]: AVC avc: denied { bpf } for pid=3536 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.811000 audit[3536]: AVC avc: denied { perfmon } for pid=3536 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.811000 audit[3536]: AVC avc: denied { perfmon } for pid=3536 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.811000 audit[3536]: AVC avc: denied { perfmon } for pid=3536 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.811000 audit[3536]: AVC avc: denied { perfmon } for pid=3536 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.811000 audit[3536]: AVC avc: denied { perfmon } for pid=3536 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.811000 audit[3536]: AVC avc: denied { bpf } for pid=3536 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.811000 audit[3536]: AVC avc: denied { bpf } for pid=3536 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.811000 audit: BPF prog-id=10 op=LOAD Dec 13 01:59:22.811000 audit[3536]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff1f830820 a2=98 a3=3 items=0 ppid=3436 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:22.811000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 01:59:22.811000 audit: BPF prog-id=10 op=UNLOAD Dec 13 01:59:22.811000 audit[3536]: AVC avc: denied { bpf } for pid=3536 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.811000 audit[3536]: AVC avc: denied { bpf } for pid=3536 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.811000 audit[3536]: AVC avc: denied { perfmon } for pid=3536 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.811000 audit[3536]: AVC avc: denied { perfmon } for pid=3536 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.811000 audit[3536]: AVC avc: denied { perfmon } for pid=3536 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.811000 audit[3536]: AVC avc: denied { perfmon } for pid=3536 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.811000 audit[3536]: AVC avc: denied { perfmon } for pid=3536 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.811000 audit[3536]: AVC avc: denied { bpf } for pid=3536 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.811000 audit[3536]: AVC avc: denied { bpf } for pid=3536 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.811000 audit: BPF prog-id=11 op=LOAD Dec 13 01:59:22.811000 audit[3536]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff1f830600 a2=74 a3=540051 items=0 ppid=3436 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:22.811000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 01:59:22.811000 audit: BPF prog-id=11 op=UNLOAD Dec 13 01:59:22.811000 audit[3536]: AVC avc: denied { bpf } for pid=3536 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.811000 audit[3536]: AVC avc: denied { bpf } for pid=3536 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.811000 audit[3536]: AVC avc: denied { perfmon } for pid=3536 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.811000 audit[3536]: AVC avc: denied { perfmon } for pid=3536 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.811000 audit[3536]: AVC avc: denied { perfmon } for pid=3536 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.811000 audit[3536]: AVC avc: denied { perfmon } for pid=3536 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.811000 audit[3536]: AVC avc: denied { perfmon } for pid=3536 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.811000 audit[3536]: AVC avc: denied { bpf } for pid=3536 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.811000 audit[3536]: AVC avc: denied { bpf } for pid=3536 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.811000 audit: BPF prog-id=12 op=LOAD Dec 13 01:59:22.811000 audit[3536]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff1f830630 a2=94 a3=2 items=0 ppid=3436 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:22.811000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 01:59:22.811000 audit: BPF prog-id=12 op=UNLOAD Dec 13 01:59:22.916000 audit[3536]: AVC avc: denied { bpf } for pid=3536 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.916000 audit[3536]: AVC avc: denied { bpf } for pid=3536 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.916000 audit[3536]: AVC avc: denied { perfmon } for pid=3536 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.916000 audit[3536]: AVC avc: denied { perfmon } for pid=3536 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.916000 audit[3536]: AVC avc: denied { perfmon } for pid=3536 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.916000 audit[3536]: AVC avc: denied { perfmon } for pid=3536 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.916000 audit[3536]: AVC avc: denied { perfmon } for pid=3536 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.916000 audit[3536]: AVC avc: denied { bpf } for pid=3536 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.916000 audit[3536]: AVC avc: denied { bpf } for pid=3536 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.916000 audit: BPF prog-id=13 op=LOAD Dec 13 01:59:22.916000 audit[3536]: SYSCALL arch=c000003e syscall=321 
success=yes exit=4 a0=5 a1=7fff1f8304f0 a2=40 a3=1 items=0 ppid=3436 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:22.916000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 01:59:22.916000 audit: BPF prog-id=13 op=UNLOAD Dec 13 01:59:22.916000 audit[3536]: AVC avc: denied { perfmon } for pid=3536 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.916000 audit[3536]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7fff1f8305c0 a2=50 a3=7fff1f8306a0 items=0 ppid=3436 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:22.916000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 01:59:22.925000 audit[3536]: AVC avc: denied { bpf } for pid=3536 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.925000 audit[3536]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff1f830500 a2=28 a3=0 items=0 ppid=3436 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:22.925000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 01:59:22.925000 audit[3536]: AVC avc: denied { bpf } for pid=3536 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.925000 audit[3536]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff1f830530 a2=28 a3=0 items=0 ppid=3436 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:22.925000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 01:59:22.925000 audit[3536]: AVC avc: denied { bpf } for pid=3536 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.925000 audit[3536]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff1f830440 a2=28 a3=0 items=0 ppid=3436 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:22.925000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 01:59:22.925000 audit[3536]: AVC avc: denied { bpf } for pid=3536 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.925000 audit[3536]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff1f830550 a2=28 a3=0 items=0 ppid=3436 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:22.925000 
audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 01:59:22.925000 audit[3536]: AVC avc: denied { bpf } for pid=3536 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.925000 audit[3536]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff1f830530 a2=28 a3=0 items=0 ppid=3436 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:22.925000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 01:59:22.925000 audit[3536]: AVC avc: denied { bpf } for pid=3536 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.925000 audit[3536]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff1f830520 a2=28 a3=0 items=0 ppid=3436 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:22.925000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 01:59:22.925000 audit[3536]: AVC avc: denied { bpf } for pid=3536 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.925000 audit[3536]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff1f830550 a2=28 a3=0 items=0 ppid=3436 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:22.925000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 01:59:22.925000 audit[3536]: AVC avc: denied { bpf } for pid=3536 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.925000 audit[3536]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff1f830530 a2=28 a3=0 items=0 ppid=3436 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:22.925000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 01:59:22.925000 audit[3536]: AVC avc: denied { bpf } for pid=3536 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.925000 audit[3536]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff1f830550 a2=28 a3=0 items=0 ppid=3436 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:22.925000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 01:59:22.925000 audit[3536]: AVC avc: denied { bpf } for pid=3536 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.925000 audit[3536]: SYSCALL arch=c000003e 
syscall=321 success=no exit=-22 a0=12 a1=7fff1f830520 a2=28 a3=0 items=0 ppid=3436 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:22.925000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 01:59:22.925000 audit[3536]: AVC avc: denied { bpf } for pid=3536 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.925000 audit[3536]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff1f830590 a2=28 a3=0 items=0 ppid=3436 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:22.925000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 01:59:22.925000 audit[3536]: AVC avc: denied { perfmon } for pid=3536 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.925000 audit[3536]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fff1f830340 a2=50 a3=1 items=0 ppid=3436 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:22.925000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 01:59:22.925000 audit[3536]: AVC avc: denied { bpf } for pid=3536 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.925000 audit[3536]: AVC avc: denied { bpf } for pid=3536 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.925000 audit[3536]: AVC avc: denied { perfmon } for pid=3536 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.925000 audit[3536]: AVC avc: denied { perfmon } for pid=3536 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.925000 audit[3536]: AVC avc: denied { perfmon } for pid=3536 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.925000 audit[3536]: AVC avc: denied { perfmon } for pid=3536 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.925000 audit[3536]: AVC avc: denied { perfmon } for pid=3536 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.925000 audit[3536]: AVC avc: denied { bpf } for pid=3536 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.925000 audit[3536]: AVC avc: denied { bpf } for pid=3536 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Dec 13 01:59:22.925000 audit: BPF prog-id=14 op=LOAD Dec 13 01:59:22.925000 audit[3536]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff1f830340 a2=94 a3=5 items=0 ppid=3436 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:22.925000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 01:59:22.926000 audit: BPF prog-id=14 op=UNLOAD Dec 13 01:59:22.926000 audit[3536]: AVC avc: denied { perfmon } for pid=3536 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.926000 audit[3536]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fff1f8303f0 a2=50 a3=1 items=0 ppid=3436 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:22.926000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 01:59:22.926000 audit[3536]: AVC avc: denied { bpf } for pid=3536 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.926000 audit[3536]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7fff1f830510 a2=4 a3=38 items=0 ppid=3436 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:22.926000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 01:59:22.926000 audit[3536]: AVC avc: denied { bpf } for pid=3536 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.926000 audit[3536]: AVC avc: denied { bpf } for pid=3536 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.926000 audit[3536]: AVC avc: denied { perfmon } for pid=3536 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.926000 audit[3536]: AVC avc: denied { bpf } for pid=3536 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.926000 audit[3536]: AVC avc: denied { perfmon } for pid=3536 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.926000 audit[3536]: AVC avc: denied { perfmon } for pid=3536 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.926000 audit[3536]: AVC avc: denied { perfmon } for pid=3536 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.926000 audit[3536]: AVC avc: denied { perfmon } for pid=3536 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Dec 13 01:59:22.926000 audit[3536]: AVC avc: denied { perfmon } for pid=3536 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.926000 audit[3536]: AVC avc: denied { bpf } for pid=3536 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.926000 audit[3536]: AVC avc: denied { confidentiality } for pid=3536 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 01:59:22.926000 audit[3536]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff1f830560 a2=94 a3=6 items=0 ppid=3436 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:22.926000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 01:59:22.926000 audit[3536]: AVC avc: denied { bpf } for pid=3536 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.926000 audit[3536]: AVC avc: denied { bpf } for pid=3536 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.926000 audit[3536]: AVC avc: denied { perfmon } for pid=3536 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.926000 audit[3536]: AVC avc: denied { bpf } for pid=3536 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.926000 audit[3536]: AVC avc: denied { perfmon } for pid=3536 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.926000 audit[3536]: AVC avc: denied { perfmon } for pid=3536 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.926000 audit[3536]: AVC avc: denied { perfmon } for pid=3536 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.926000 audit[3536]: AVC avc: denied { perfmon } for pid=3536 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.926000 audit[3536]: AVC avc: denied { perfmon } for pid=3536 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.926000 audit[3536]: AVC avc: denied { bpf } for pid=3536 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.926000 audit[3536]: AVC avc: denied { confidentiality } for pid=3536 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 01:59:22.926000 
audit[3536]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff1f82fd10 a2=94 a3=83 items=0 ppid=3436 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:22.926000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 01:59:22.926000 audit[3536]: AVC avc: denied { bpf } for pid=3536 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.926000 audit[3536]: AVC avc: denied { bpf } for pid=3536 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.926000 audit[3536]: AVC avc: denied { perfmon } for pid=3536 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.926000 audit[3536]: AVC avc: denied { bpf } for pid=3536 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.926000 audit[3536]: AVC avc: denied { perfmon } for pid=3536 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.926000 audit[3536]: AVC avc: denied { perfmon } for pid=3536 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.926000 audit[3536]: AVC avc: denied { perfmon } for pid=3536 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.926000 audit[3536]: AVC avc: denied { perfmon } for pid=3536 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.926000 audit[3536]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff1f82fd10 a2=94 a3=83 items=0 ppid=3436 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:22.926000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 01:59:22.934000 audit[3540]: AVC avc: denied { bpf } for pid=3540 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.934000 audit[3540]: AVC avc: denied { bpf } for pid=3540 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.934000 audit[3540]: AVC avc: denied { perfmon } for pid=3540 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.934000 audit[3540]: AVC avc: denied { perfmon } for pid=3540 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.934000 audit[3540]: AVC avc: denied { perfmon } for pid=3540 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.934000 audit[3540]: AVC avc: denied { perfmon } for pid=3540 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.934000 audit[3540]: AVC avc: denied { perfmon } for pid=3540 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.934000 audit[3540]: AVC avc: denied { bpf } for pid=3540 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.934000 audit[3540]: AVC avc: denied { bpf } for pid=3540 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.934000 audit: BPF prog-id=15 op=LOAD Dec 13 01:59:22.934000 audit[3540]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff187f74b0 a2=98 a3=1999999999999999 items=0 ppid=3436 pid=3540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:22.934000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 01:59:22.934000 audit: BPF prog-id=15 op=UNLOAD Dec 13 01:59:22.934000 audit[3540]: AVC avc: denied { bpf } for pid=3540 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.934000 audit[3540]: AVC avc: denied { bpf } for pid=3540 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.934000 audit[3540]: AVC avc: denied { perfmon } for pid=3540 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.934000 audit[3540]: AVC avc: denied { perfmon } for pid=3540 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.934000 audit[3540]: AVC avc: denied { perfmon } for pid=3540 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.934000 audit[3540]: AVC avc: denied { perfmon } for pid=3540 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.934000 audit[3540]: AVC avc: denied { perfmon } for pid=3540 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.934000 audit[3540]: AVC avc: denied { bpf } for pid=3540 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.934000 audit[3540]: AVC avc: denied { bpf } for pid=3540 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.934000 audit: BPF prog-id=16 op=LOAD Dec 13 01:59:22.934000 audit[3540]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff187f7390 a2=74 a3=ffff items=0 ppid=3436 pid=3540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:22.934000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 01:59:22.935000 audit: BPF prog-id=16 op=UNLOAD Dec 13 01:59:22.935000 audit[3540]: AVC avc: denied { bpf } for pid=3540 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.935000 audit[3540]: AVC avc: denied { bpf } for pid=3540 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.935000 audit[3540]: AVC avc: denied { perfmon } for pid=3540 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.935000 audit[3540]: AVC avc: denied { perfmon } for pid=3540 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.935000 audit[3540]: AVC avc: denied { perfmon } for pid=3540 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.935000 audit[3540]: AVC avc: denied { perfmon } for pid=3540 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.935000 audit[3540]: AVC avc: denied { perfmon } for pid=3540 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.935000 audit[3540]: AVC avc: denied { bpf } for pid=3540 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.935000 audit[3540]: AVC avc: denied { bpf } for pid=3540 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.935000 audit: BPF prog-id=17 op=LOAD Dec 13 01:59:22.935000 audit[3540]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff187f73d0 a2=40 a3=7fff187f75b0 items=0 ppid=3436 pid=3540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:22.935000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 01:59:22.935000 audit: BPF prog-id=17 op=UNLOAD Dec 13 01:59:22.974915 systemd-networkd[1081]: 
vxlan.calico: Link UP Dec 13 01:59:22.974932 systemd-networkd[1081]: vxlan.calico: Gained carrier Dec 13 01:59:22.985000 audit[3569]: AVC avc: denied { bpf } for pid=3569 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.985000 audit[3569]: AVC avc: denied { bpf } for pid=3569 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.985000 audit[3569]: AVC avc: denied { perfmon } for pid=3569 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.985000 audit[3569]: AVC avc: denied { perfmon } for pid=3569 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.985000 audit[3569]: AVC avc: denied { perfmon } for pid=3569 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.985000 audit[3569]: AVC avc: denied { perfmon } for pid=3569 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.985000 audit[3569]: AVC avc: denied { perfmon } for pid=3569 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.985000 audit[3569]: AVC avc: denied { bpf } for pid=3569 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.985000 audit[3569]: AVC avc: denied { bpf } for pid=3569 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.985000 audit: BPF prog-id=18 op=LOAD Dec 13 01:59:22.985000 audit[3569]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff8eb3eb10 a2=98 a3=ffffffff items=0 ppid=3436 pid=3569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:22.985000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 01:59:22.986000 audit: BPF prog-id=18 op=UNLOAD Dec 13 01:59:22.986000 audit[3569]: AVC avc: denied { bpf } for pid=3569 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.986000 audit[3569]: AVC avc: denied { bpf } for pid=3569 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.986000 audit[3569]: AVC avc: denied { perfmon } for pid=3569 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.986000 audit[3569]: AVC avc: denied { perfmon } for pid=3569 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Dec 13 01:59:22.986000 audit[3569]: AVC avc: denied { perfmon } for pid=3569 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.986000 audit[3569]: AVC avc: denied { perfmon } for pid=3569 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.986000 audit[3569]: AVC avc: denied { perfmon } for pid=3569 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.986000 audit[3569]: AVC avc: denied { bpf } for pid=3569 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.986000 audit[3569]: AVC avc: denied { bpf } for pid=3569 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.986000 audit: BPF prog-id=19 op=LOAD Dec 13 01:59:22.986000 audit[3569]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff8eb3e920 a2=74 a3=540051 items=0 ppid=3436 pid=3569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:22.986000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 01:59:22.986000 audit: BPF prog-id=19 op=UNLOAD Dec 13 01:59:22.986000 audit[3569]: AVC avc: denied { bpf } for pid=3569 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.986000 audit[3569]: AVC avc: denied { bpf } for pid=3569 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.986000 audit[3569]: AVC avc: denied { perfmon } for pid=3569 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.986000 audit[3569]: AVC avc: denied { perfmon } for pid=3569 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.986000 audit[3569]: AVC avc: denied { perfmon } for pid=3569 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.986000 audit[3569]: AVC avc: denied { perfmon } for pid=3569 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.986000 audit[3569]: AVC avc: denied { perfmon } for pid=3569 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.986000 audit[3569]: AVC avc: denied { bpf } for pid=3569 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.986000 
audit[3569]: AVC avc: denied { bpf } for pid=3569 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.986000 audit: BPF prog-id=20 op=LOAD Dec 13 01:59:22.986000 audit[3569]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff8eb3e950 a2=94 a3=2 items=0 ppid=3436 pid=3569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:22.986000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 01:59:22.986000 audit: BPF prog-id=20 op=UNLOAD Dec 13 01:59:22.986000 audit[3569]: AVC avc: denied { bpf } for pid=3569 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.986000 audit[3569]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7fff8eb3e820 a2=28 a3=0 items=0 ppid=3436 pid=3569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:22.986000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 01:59:22.986000 audit[3569]: AVC avc: denied { bpf } for pid=3569 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.986000 audit[3569]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff8eb3e850 a2=28 a3=0 items=0 ppid=3436 pid=3569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:22.986000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 01:59:22.986000 audit[3569]: AVC avc: denied { bpf } for pid=3569 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.986000 audit[3569]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff8eb3e760 a2=28 a3=0 items=0 ppid=3436 pid=3569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:22.986000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 01:59:22.986000 audit[3569]: AVC avc: denied { bpf } for pid=3569 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.986000 audit[3569]: SYSCALL 
arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7fff8eb3e870 a2=28 a3=0 items=0 ppid=3436 pid=3569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:22.986000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 01:59:22.986000 audit[3569]: AVC avc: denied { bpf } for pid=3569 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.986000 audit[3569]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7fff8eb3e850 a2=28 a3=0 items=0 ppid=3436 pid=3569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:22.986000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 01:59:22.986000 audit[3569]: AVC avc: denied { bpf } for pid=3569 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.986000 audit[3569]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7fff8eb3e840 a2=28 a3=0 items=0 ppid=3436 pid=3569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:22.986000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 01:59:22.986000 audit[3569]: AVC avc: denied { bpf } for pid=3569 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.986000 audit[3569]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7fff8eb3e870 a2=28 a3=0 items=0 ppid=3436 pid=3569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:22.986000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 01:59:22.986000 audit[3569]: AVC avc: denied { bpf } for pid=3569 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.986000 audit[3569]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff8eb3e850 a2=28 a3=0 items=0 ppid=3436 pid=3569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:22.986000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 01:59:22.986000 audit[3569]: AVC avc: denied { bpf } for pid=3569 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.986000 audit[3569]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff8eb3e870 a2=28 a3=0 items=0 ppid=3436 pid=3569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:22.986000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 01:59:22.986000 audit[3569]: AVC avc: denied { bpf } for pid=3569 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.986000 audit[3569]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff8eb3e840 a2=28 a3=0 items=0 ppid=3436 pid=3569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:22.986000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 01:59:22.986000 audit[3569]: AVC avc: denied { bpf } for pid=3569 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.986000 audit[3569]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7fff8eb3e8b0 a2=28 a3=0 items=0 ppid=3436 pid=3569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:22.986000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 01:59:22.986000 audit[3569]: AVC avc: denied { bpf } for pid=3569 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.986000 audit[3569]: AVC avc: denied { bpf } for pid=3569 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.986000 audit[3569]: AVC avc: denied { perfmon } for pid=3569 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.986000 audit[3569]: AVC avc: denied { perfmon } for pid=3569 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.986000 audit[3569]: AVC avc: denied { perfmon } for 
pid=3569 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.986000 audit[3569]: AVC avc: denied { perfmon } for pid=3569 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.986000 audit[3569]: AVC avc: denied { perfmon } for pid=3569 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.986000 audit[3569]: AVC avc: denied { bpf } for pid=3569 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.986000 audit[3569]: AVC avc: denied { bpf } for pid=3569 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.986000 audit: BPF prog-id=21 op=LOAD Dec 13 01:59:22.986000 audit[3569]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff8eb3e720 a2=40 a3=0 items=0 ppid=3436 pid=3569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:22.986000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 01:59:22.986000 audit: BPF prog-id=21 op=UNLOAD Dec 13 01:59:22.987000 audit[3569]: AVC avc: denied { bpf } for pid=3569 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.987000 audit[3569]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7fff8eb3e710 a2=50 a3=2800 items=0 ppid=3436 pid=3569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:22.987000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 01:59:22.987000 audit[3569]: AVC avc: denied { bpf } for pid=3569 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.987000 audit[3569]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7fff8eb3e710 a2=50 a3=2800 items=0 ppid=3436 pid=3569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:22.987000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 01:59:22.987000 audit[3569]: AVC avc: denied { bpf } for pid=3569 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.987000 
audit[3569]: AVC avc: denied { bpf } for pid=3569 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.987000 audit[3569]: AVC avc: denied { bpf } for pid=3569 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.987000 audit[3569]: AVC avc: denied { perfmon } for pid=3569 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.987000 audit[3569]: AVC avc: denied { perfmon } for pid=3569 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.987000 audit[3569]: AVC avc: denied { perfmon } for pid=3569 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.987000 audit[3569]: AVC avc: denied { perfmon } for pid=3569 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.987000 audit[3569]: AVC avc: denied { perfmon } for pid=3569 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.987000 audit[3569]: AVC avc: denied { bpf } for pid=3569 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.987000 audit[3569]: AVC avc: denied { bpf } for pid=3569 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.987000 audit: BPF prog-id=22 op=LOAD Dec 13 01:59:22.987000 audit[3569]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff8eb3df30 a2=94 a3=2 items=0 ppid=3436 pid=3569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:22.987000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 01:59:22.987000 audit: BPF prog-id=22 op=UNLOAD Dec 13 01:59:22.987000 audit[3569]: AVC avc: denied { bpf } for pid=3569 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.987000 audit[3569]: AVC avc: denied { bpf } for pid=3569 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.987000 audit[3569]: AVC avc: denied { bpf } for pid=3569 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.987000 audit[3569]: AVC avc: denied { perfmon } for pid=3569 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.987000 audit[3569]: AVC avc: denied { perfmon } for pid=3569 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.987000 audit[3569]: AVC avc: denied { perfmon } for pid=3569 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.987000 audit[3569]: AVC avc: denied { perfmon } for pid=3569 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.987000 audit[3569]: AVC avc: denied { perfmon } for pid=3569 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.987000 audit[3569]: AVC avc: denied { bpf } for pid=3569 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.987000 audit[3569]: AVC avc: denied { bpf } for pid=3569 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.987000 audit: BPF prog-id=23 op=LOAD Dec 13 01:59:22.987000 audit[3569]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff8eb3e030 a2=94 a3=2d items=0 ppid=3436 pid=3569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:22.987000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 01:59:22.990000 audit[3573]: AVC avc: denied { bpf } for pid=3573 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.990000 audit[3573]: AVC avc: denied { bpf } for pid=3573 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.990000 audit[3573]: AVC avc: denied { perfmon } for pid=3573 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.990000 audit[3573]: AVC avc: denied { perfmon } for pid=3573 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.990000 audit[3573]: AVC avc: denied { perfmon } for pid=3573 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.990000 audit[3573]: AVC avc: denied { perfmon } for pid=3573 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.990000 audit[3573]: AVC avc: denied { perfmon } for pid=3573 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.990000 audit[3573]: AVC avc: denied { bpf } for pid=3573 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
Dec 13 01:59:22.990000 audit[3573]: AVC avc: denied { bpf } for pid=3573 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.990000 audit: BPF prog-id=24 op=LOAD Dec 13 01:59:22.990000 audit[3573]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc8371c9f0 a2=98 a3=0 items=0 ppid=3436 pid=3573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:22.990000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 01:59:22.990000 audit: BPF prog-id=24 op=UNLOAD Dec 13 01:59:22.990000 audit[3573]: AVC avc: denied { bpf } for pid=3573 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.990000 audit[3573]: AVC avc: denied { bpf } for pid=3573 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.990000 audit[3573]: AVC avc: denied { perfmon } for pid=3573 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.990000 audit[3573]: AVC avc: denied { perfmon } for pid=3573 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.990000 audit[3573]: AVC avc: denied { perfmon } for pid=3573 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.990000 audit[3573]: AVC avc: denied { perfmon } for pid=3573 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.990000 audit[3573]: AVC avc: denied { perfmon } for pid=3573 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.990000 audit[3573]: AVC avc: denied { bpf } for pid=3573 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.990000 audit[3573]: AVC avc: denied { bpf } for pid=3573 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.990000 audit: BPF prog-id=25 op=LOAD Dec 13 01:59:22.990000 audit[3573]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc8371c7d0 a2=74 a3=540051 items=0 ppid=3436 pid=3573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:22.990000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 01:59:22.990000 audit: BPF prog-id=25 op=UNLOAD Dec 13 01:59:22.990000 audit[3573]: AVC avc: denied { bpf } for pid=3573 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.990000 audit[3573]: AVC avc: denied { bpf } for pid=3573 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.990000 audit[3573]: AVC avc: denied { perfmon } for pid=3573 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.990000 audit[3573]: AVC avc: denied { perfmon } for pid=3573 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.990000 audit[3573]: AVC avc: denied { perfmon } for pid=3573 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.990000 audit[3573]: AVC avc: denied { perfmon } for pid=3573 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.990000 audit[3573]: AVC avc: denied { perfmon } for pid=3573 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.990000 audit[3573]: AVC avc: denied { bpf } for pid=3573 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.990000 audit[3573]: AVC avc: denied { bpf } for pid=3573 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:22.990000 audit: BPF prog-id=26 op=LOAD Dec 13 01:59:22.990000 audit[3573]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc8371c800 a2=94 a3=2 items=0 ppid=3436 pid=3573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:22.990000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 01:59:22.990000 audit: BPF prog-id=26 op=UNLOAD Dec 13 01:59:23.097000 audit[3573]: AVC avc: denied { bpf } for pid=3573 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.097000 audit[3573]: AVC avc: denied { bpf } for pid=3573 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.097000 audit[3573]: AVC avc: denied { perfmon } for pid=3573 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.097000 audit[3573]: AVC avc: denied { perfmon } for pid=3573 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.097000 audit[3573]: AVC avc: denied { perfmon } for pid=3573 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.097000 audit[3573]: AVC avc: denied { perfmon } for pid=3573 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.097000 audit[3573]: AVC avc: denied { perfmon } for pid=3573 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.097000 audit[3573]: AVC avc: denied { bpf } for pid=3573 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.097000 audit[3573]: AVC avc: denied { bpf } for pid=3573 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.097000 audit: BPF prog-id=27 op=LOAD Dec 13 01:59:23.097000 audit[3573]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc8371c6c0 a2=40 a3=1 items=0 ppid=3436 pid=3573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:23.097000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 01:59:23.097000 audit: BPF prog-id=27 op=UNLOAD Dec 13 01:59:23.097000 audit[3573]: AVC avc: denied { perfmon } for pid=3573 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.097000 audit[3573]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffc8371c790 a2=50 a3=7ffc8371c870 items=0 ppid=3436 pid=3573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:23.097000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 01:59:23.105000 audit[3573]: AVC avc: denied { bpf } for pid=3573 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.105000 audit[3573]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc8371c6d0 a2=28 a3=0 items=0 ppid=3436 pid=3573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:23.105000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 01:59:23.105000 audit[3573]: AVC avc: denied { bpf } for pid=3573 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.105000 audit[3573]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc8371c700 a2=28 a3=0 items=0 ppid=3436 pid=3573 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:23.105000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 01:59:23.105000 audit[3573]: AVC avc: denied { bpf } for pid=3573 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.105000 audit[3573]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc8371c610 a2=28 a3=0 items=0 ppid=3436 pid=3573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:23.105000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 01:59:23.105000 audit[3573]: AVC avc: denied { bpf } for pid=3573 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.105000 audit[3573]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc8371c720 a2=28 a3=0 items=0 ppid=3436 pid=3573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:23.105000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 01:59:23.105000 audit[3573]: AVC avc: denied { bpf } for pid=3573 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.105000 audit[3573]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc8371c700 a2=28 a3=0 items=0 ppid=3436 pid=3573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:23.105000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 01:59:23.105000 audit[3573]: AVC avc: denied { bpf } for pid=3573 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.105000 audit[3573]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc8371c6f0 a2=28 a3=0 items=0 ppid=3436 pid=3573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:23.105000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 01:59:23.105000 audit[3573]: AVC avc: denied { bpf } for pid=3573 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.105000 audit[3573]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc8371c720 a2=28 a3=0 items=0 ppid=3436 pid=3573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:23.105000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 01:59:23.105000 audit[3573]: AVC avc: denied { bpf } for pid=3573 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.105000 audit[3573]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc8371c700 a2=28 a3=0 items=0 ppid=3436 pid=3573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:23.105000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 01:59:23.105000 audit[3573]: AVC avc: denied { bpf } for pid=3573 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.105000 audit[3573]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc8371c720 a2=28 a3=0 items=0 ppid=3436 pid=3573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:23.105000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 01:59:23.105000 audit[3573]: AVC avc: denied { bpf } for pid=3573 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.105000 audit[3573]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc8371c6f0 a2=28 a3=0 items=0 ppid=3436 pid=3573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:23.105000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 01:59:23.105000 audit[3573]: AVC avc: denied { bpf } for pid=3573 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.105000 audit[3573]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc8371c760 a2=28 a3=0 items=0 ppid=3436 pid=3573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 
01:59:23.105000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 01:59:23.105000 audit[3573]: AVC avc: denied { perfmon } for pid=3573 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.105000 audit[3573]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffc8371c510 a2=50 a3=1 items=0 ppid=3436 pid=3573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:23.105000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 01:59:23.105000 audit[3573]: AVC avc: denied { bpf } for pid=3573 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.105000 audit[3573]: AVC avc: denied { bpf } for pid=3573 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.105000 audit[3573]: AVC avc: denied { perfmon } for pid=3573 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.105000 audit[3573]: AVC avc: denied { perfmon } for pid=3573 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.105000 audit[3573]: AVC avc: denied { perfmon } for pid=3573 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.105000 audit[3573]: AVC avc: denied { perfmon } for pid=3573 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.105000 audit[3573]: AVC avc: denied { perfmon } for pid=3573 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.105000 audit[3573]: AVC avc: denied { bpf } for pid=3573 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.105000 audit[3573]: AVC avc: denied { bpf } for pid=3573 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.105000 audit: BPF prog-id=28 op=LOAD Dec 13 01:59:23.105000 audit[3573]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc8371c510 a2=94 a3=5 items=0 ppid=3436 pid=3573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:23.105000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 
01:59:23.105000 audit: BPF prog-id=28 op=UNLOAD Dec 13 01:59:23.105000 audit[3573]: AVC avc: denied { perfmon } for pid=3573 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.105000 audit[3573]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffc8371c5c0 a2=50 a3=1 items=0 ppid=3436 pid=3573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:23.105000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 01:59:23.105000 audit[3573]: AVC avc: denied { bpf } for pid=3573 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.105000 audit[3573]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffc8371c6e0 a2=4 a3=38 items=0 ppid=3436 pid=3573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:23.105000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 01:59:23.105000 audit[3573]: AVC avc: denied { bpf } for pid=3573 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.105000 audit[3573]: AVC avc: denied { bpf } for pid=3573 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.105000 audit[3573]: AVC avc: denied { perfmon } for pid=3573 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.105000 audit[3573]: AVC avc: denied { bpf } for pid=3573 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.105000 audit[3573]: AVC avc: denied { perfmon } for pid=3573 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.105000 audit[3573]: AVC avc: denied { perfmon } for pid=3573 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.105000 audit[3573]: AVC avc: denied { perfmon } for pid=3573 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.105000 audit[3573]: AVC avc: denied { perfmon } for pid=3573 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.105000 audit[3573]: AVC avc: denied { perfmon } for pid=3573 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.105000 
audit[3573]: AVC avc: denied { bpf } for pid=3573 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.105000 audit[3573]: AVC avc: denied { confidentiality } for pid=3573 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 01:59:23.105000 audit[3573]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffc8371c730 a2=94 a3=6 items=0 ppid=3436 pid=3573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:23.105000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 01:59:23.105000 audit[3573]: AVC avc: denied { bpf } for pid=3573 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.105000 audit[3573]: AVC avc: denied { bpf } for pid=3573 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.105000 audit[3573]: AVC avc: denied { perfmon } for pid=3573 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.105000 audit[3573]: AVC avc: denied { bpf } for pid=3573 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.105000 audit[3573]: AVC avc: denied { perfmon } for pid=3573 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.105000 audit[3573]: AVC avc: denied { perfmon } for pid=3573 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.105000 audit[3573]: AVC avc: denied { perfmon } for pid=3573 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.105000 audit[3573]: AVC avc: denied { perfmon } for pid=3573 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.105000 audit[3573]: AVC avc: denied { perfmon } for pid=3573 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.105000 audit[3573]: AVC avc: denied { bpf } for pid=3573 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.105000 audit[3573]: AVC avc: denied { confidentiality } for pid=3573 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 01:59:23.105000 audit[3573]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffc8371bee0 a2=94 a3=83 items=0 ppid=3436 pid=3573 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:23.105000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 01:59:23.105000 audit[3573]: AVC avc: denied { bpf } for pid=3573 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.105000 audit[3573]: AVC avc: denied { bpf } for pid=3573 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.105000 audit[3573]: AVC avc: denied { perfmon } for pid=3573 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.105000 audit[3573]: AVC avc: denied { bpf } for pid=3573 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.105000 audit[3573]: AVC avc: denied { perfmon } for pid=3573 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.105000 audit[3573]: AVC avc: denied { perfmon } for pid=3573 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.105000 audit[3573]: AVC avc: denied { perfmon } for pid=3573 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.105000 audit[3573]: AVC avc: denied { perfmon } for pid=3573 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.105000 audit[3573]: AVC avc: denied { perfmon } for pid=3573 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.105000 audit[3573]: AVC avc: denied { bpf } for pid=3573 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.105000 audit[3573]: AVC avc: denied { confidentiality } for pid=3573 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 01:59:23.105000 audit[3573]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffc8371bee0 a2=94 a3=83 items=0 ppid=3436 pid=3573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:23.105000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 01:59:23.106000 audit[3573]: AVC avc: denied { bpf } for pid=3573 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Dec 13 01:59:23.106000 audit[3573]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffc8371d920 a2=10 a3=208 items=0 ppid=3436 pid=3573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:23.106000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 01:59:23.106000 audit[3573]: AVC avc: denied { bpf } for pid=3573 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.106000 audit[3573]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffc8371d7c0 a2=10 a3=3 items=0 ppid=3436 pid=3573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:23.106000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 01:59:23.106000 audit[3573]: AVC avc: denied { bpf } for pid=3573 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.106000 audit[3573]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffc8371d760 a2=10 a3=3 items=0 ppid=3436 pid=3573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:23.106000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 01:59:23.106000 audit[3573]: AVC avc: denied { bpf } for pid=3573 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 01:59:23.106000 audit[3573]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffc8371d760 a2=10 a3=7 items=0 ppid=3436 pid=3573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:23.106000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 01:59:23.112000 audit: BPF prog-id=23 op=UNLOAD Dec 13 01:59:23.159000 audit[3600]: NETFILTER_CFG table=mangle:97 family=2 entries=16 op=nft_register_chain pid=3600 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 01:59:23.159000 audit[3600]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7fff535ab890 a2=0 a3=7fff535ab87c items=0 ppid=3436 pid=3600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:23.159000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 01:59:23.168000 audit[3599]: NETFILTER_CFG table=raw:98 family=2 entries=21 op=nft_register_chain pid=3599 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 01:59:23.168000 audit[3599]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffdbf485300 a2=0 a3=7ffdbf4852ec items=0 ppid=3436 pid=3599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:23.168000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 01:59:23.169000 audit[3601]: NETFILTER_CFG table=filter:99 family=2 entries=39 op=nft_register_chain pid=3601 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 01:59:23.169000 audit[3601]: SYSCALL arch=c000003e syscall=46 success=yes exit=18968 a0=3 a1=7ffd02ff5810 a2=0 a3=7ffd02ff57fc items=0 ppid=3436 pid=3601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:23.169000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 01:59:23.169000 audit[3604]: NETFILTER_CFG table=nat:100 family=2 entries=15 op=nft_register_chain pid=3604 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 01:59:23.169000 audit[3604]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffe749194c0 a2=0 a3=0 items=0 ppid=3436 pid=3604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:23.169000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 01:59:23.619858 systemd[1]: Started sshd@9-10.0.0.48:22-10.0.0.1:56196.service. Dec 13 01:59:23.620875 kernel: kauditd_printk_skb: 522 callbacks suppressed Dec 13 01:59:23.620912 kernel: audit: type=1130 audit(1734055163.619:408): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.48:22-10.0.0.1:56196 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:23.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.48:22-10.0.0.1:56196 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:59:24.218899 systemd-networkd[1081]: vxlan.calico: Gained IPv6LL Dec 13 01:59:24.616000 audit[3609]: USER_ACCT pid=3609 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:24.617656 sshd[3609]: Accepted publickey for core from 10.0.0.1 port 56196 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 01:59:24.619890 sshd[3609]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:59:24.618000 audit[3609]: CRED_ACQ pid=3609 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:24.624092 systemd-logind[1291]: New session 10 of user core. Dec 13 01:59:24.624806 systemd[1]: Started session-10.scope. Dec 13 01:59:24.626296 kernel: audit: type=1101 audit(1734055164.616:409): pid=3609 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:24.626337 kernel: audit: type=1103 audit(1734055164.618:410): pid=3609 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:24.626374 kernel: audit: type=1006 audit(1734055164.618:411): pid=3609 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Dec 13 01:59:24.618000 audit[3609]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd8924ec30 a2=3 a3=0 items=0 ppid=1 pid=3609 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:24.632567 kernel: audit: type=1300 audit(1734055164.618:411): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd8924ec30 a2=3 a3=0 items=0 ppid=1 pid=3609 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:24.632672 kernel: audit: type=1327 audit(1734055164.618:411): proctitle=737368643A20636F7265205B707269765D Dec 13 01:59:24.618000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 01:59:24.633880 kernel: audit: type=1105 audit(1734055164.629:412): pid=3609 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:24.629000 audit[3609]: USER_START pid=3609 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:24.630000 audit[3616]: CRED_ACQ pid=3616 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:24.641881 kernel: audit: type=1103 audit(1734055164.630:413): pid=3616 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:24.736058 sshd[3609]: pam_unix(sshd:session): session closed for user core Dec 13 01:59:24.736000 audit[3609]: USER_END pid=3609 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:24.738543 systemd[1]: sshd@9-10.0.0.48:22-10.0.0.1:56196.service: Deactivated successfully. Dec 13 01:59:24.739780 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 01:59:24.739850 systemd-logind[1291]: Session 10 logged out. Waiting for processes to exit. Dec 13 01:59:24.740918 systemd-logind[1291]: Removed session 10. Dec 13 01:59:24.736000 audit[3609]: CRED_DISP pid=3609 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:24.744882 kernel: audit: type=1106 audit(1734055164.736:414): pid=3609 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:24.744941 kernel: audit: type=1104 audit(1734055164.736:415): pid=3609 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:24.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.48:22-10.0.0.1:56196 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:24.766009 env[1308]: time="2024-12-13T01:59:24.765965889Z" level=info msg="StopPodSandbox for \"8b8b0df0c0edd47872b987ecccb00443d71b1feaa3b14e4c3d68228795ca6ffa\"" Dec 13 01:59:24.766442 env[1308]: time="2024-12-13T01:59:24.766421374Z" level=info msg="StopPodSandbox for \"d75b849d433e4c95b5fa38ec6af9f0dcb9f4c6305351cd07269cd841dbd45836\"" Dec 13 01:59:24.890103 env[1308]: 2024-12-13 01:59:24.820 [INFO][3668] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d75b849d433e4c95b5fa38ec6af9f0dcb9f4c6305351cd07269cd841dbd45836" Dec 13 01:59:24.890103 env[1308]: 2024-12-13 01:59:24.821 [INFO][3668] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d75b849d433e4c95b5fa38ec6af9f0dcb9f4c6305351cd07269cd841dbd45836" iface="eth0" netns="/var/run/netns/cni-5e71be65-4a11-66bc-d4e4-0a86cf8b8948" Dec 13 01:59:24.890103 env[1308]: 2024-12-13 01:59:24.821 [INFO][3668] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="d75b849d433e4c95b5fa38ec6af9f0dcb9f4c6305351cd07269cd841dbd45836" iface="eth0" netns="/var/run/netns/cni-5e71be65-4a11-66bc-d4e4-0a86cf8b8948" Dec 13 01:59:24.890103 env[1308]: 2024-12-13 01:59:24.822 [INFO][3668] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d75b849d433e4c95b5fa38ec6af9f0dcb9f4c6305351cd07269cd841dbd45836" iface="eth0" netns="/var/run/netns/cni-5e71be65-4a11-66bc-d4e4-0a86cf8b8948" Dec 13 01:59:24.890103 env[1308]: 2024-12-13 01:59:24.822 [INFO][3668] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d75b849d433e4c95b5fa38ec6af9f0dcb9f4c6305351cd07269cd841dbd45836" Dec 13 01:59:24.890103 env[1308]: 2024-12-13 01:59:24.822 [INFO][3668] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d75b849d433e4c95b5fa38ec6af9f0dcb9f4c6305351cd07269cd841dbd45836" Dec 13 01:59:24.890103 env[1308]: 2024-12-13 01:59:24.877 [INFO][3679] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d75b849d433e4c95b5fa38ec6af9f0dcb9f4c6305351cd07269cd841dbd45836" HandleID="k8s-pod-network.d75b849d433e4c95b5fa38ec6af9f0dcb9f4c6305351cd07269cd841dbd45836" Workload="localhost-k8s-calico--kube--controllers--6ff7c669bd--mkmgc-eth0" Dec 13 01:59:24.890103 env[1308]: 2024-12-13 01:59:24.878 [INFO][3679] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:59:24.890103 env[1308]: 2024-12-13 01:59:24.878 [INFO][3679] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:59:24.890103 env[1308]: 2024-12-13 01:59:24.885 [WARNING][3679] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d75b849d433e4c95b5fa38ec6af9f0dcb9f4c6305351cd07269cd841dbd45836" HandleID="k8s-pod-network.d75b849d433e4c95b5fa38ec6af9f0dcb9f4c6305351cd07269cd841dbd45836" Workload="localhost-k8s-calico--kube--controllers--6ff7c669bd--mkmgc-eth0" Dec 13 01:59:24.890103 env[1308]: 2024-12-13 01:59:24.885 [INFO][3679] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d75b849d433e4c95b5fa38ec6af9f0dcb9f4c6305351cd07269cd841dbd45836" HandleID="k8s-pod-network.d75b849d433e4c95b5fa38ec6af9f0dcb9f4c6305351cd07269cd841dbd45836" Workload="localhost-k8s-calico--kube--controllers--6ff7c669bd--mkmgc-eth0" Dec 13 01:59:24.890103 env[1308]: 2024-12-13 01:59:24.886 [INFO][3679] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:59:24.890103 env[1308]: 2024-12-13 01:59:24.888 [INFO][3668] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d75b849d433e4c95b5fa38ec6af9f0dcb9f4c6305351cd07269cd841dbd45836" Dec 13 01:59:24.891969 env[1308]: time="2024-12-13T01:59:24.891903758Z" level=info msg="TearDown network for sandbox \"d75b849d433e4c95b5fa38ec6af9f0dcb9f4c6305351cd07269cd841dbd45836\" successfully" Dec 13 01:59:24.891969 env[1308]: time="2024-12-13T01:59:24.891963671Z" level=info msg="StopPodSandbox for \"d75b849d433e4c95b5fa38ec6af9f0dcb9f4c6305351cd07269cd841dbd45836\" returns successfully" Dec 13 01:59:24.893163 env[1308]: time="2024-12-13T01:59:24.892793548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6ff7c669bd-mkmgc,Uid:4746d340-1c7d-4392-8db5-c68575618d26,Namespace:calico-system,Attempt:1,}" Dec 13 01:59:24.894938 systemd[1]: run-netns-cni\x2d5e71be65\x2d4a11\x2d66bc\x2dd4e4\x2d0a86cf8b8948.mount: Deactivated successfully. 
Dec 13 01:59:24.943654 env[1308]: 2024-12-13 01:59:24.820 [INFO][3659] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8b8b0df0c0edd47872b987ecccb00443d71b1feaa3b14e4c3d68228795ca6ffa" Dec 13 01:59:24.943654 env[1308]: 2024-12-13 01:59:24.821 [INFO][3659] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8b8b0df0c0edd47872b987ecccb00443d71b1feaa3b14e4c3d68228795ca6ffa" iface="eth0" netns="/var/run/netns/cni-6fbe443d-379b-ecde-9d7b-07f43f45db08" Dec 13 01:59:24.943654 env[1308]: 2024-12-13 01:59:24.821 [INFO][3659] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8b8b0df0c0edd47872b987ecccb00443d71b1feaa3b14e4c3d68228795ca6ffa" iface="eth0" netns="/var/run/netns/cni-6fbe443d-379b-ecde-9d7b-07f43f45db08" Dec 13 01:59:24.943654 env[1308]: 2024-12-13 01:59:24.822 [INFO][3659] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8b8b0df0c0edd47872b987ecccb00443d71b1feaa3b14e4c3d68228795ca6ffa" iface="eth0" netns="/var/run/netns/cni-6fbe443d-379b-ecde-9d7b-07f43f45db08" Dec 13 01:59:24.943654 env[1308]: 2024-12-13 01:59:24.822 [INFO][3659] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8b8b0df0c0edd47872b987ecccb00443d71b1feaa3b14e4c3d68228795ca6ffa" Dec 13 01:59:24.943654 env[1308]: 2024-12-13 01:59:24.822 [INFO][3659] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8b8b0df0c0edd47872b987ecccb00443d71b1feaa3b14e4c3d68228795ca6ffa" Dec 13 01:59:24.943654 env[1308]: 2024-12-13 01:59:24.877 [INFO][3678] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8b8b0df0c0edd47872b987ecccb00443d71b1feaa3b14e4c3d68228795ca6ffa" HandleID="k8s-pod-network.8b8b0df0c0edd47872b987ecccb00443d71b1feaa3b14e4c3d68228795ca6ffa" Workload="localhost-k8s-csi--node--driver--z69kl-eth0" Dec 13 01:59:24.943654 env[1308]: 2024-12-13 01:59:24.878 [INFO][3678] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:59:24.943654 env[1308]: 2024-12-13 01:59:24.886 [INFO][3678] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:59:24.943654 env[1308]: 2024-12-13 01:59:24.893 [WARNING][3678] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8b8b0df0c0edd47872b987ecccb00443d71b1feaa3b14e4c3d68228795ca6ffa" HandleID="k8s-pod-network.8b8b0df0c0edd47872b987ecccb00443d71b1feaa3b14e4c3d68228795ca6ffa" Workload="localhost-k8s-csi--node--driver--z69kl-eth0" Dec 13 01:59:24.943654 env[1308]: 2024-12-13 01:59:24.893 [INFO][3678] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8b8b0df0c0edd47872b987ecccb00443d71b1feaa3b14e4c3d68228795ca6ffa" HandleID="k8s-pod-network.8b8b0df0c0edd47872b987ecccb00443d71b1feaa3b14e4c3d68228795ca6ffa" Workload="localhost-k8s-csi--node--driver--z69kl-eth0" Dec 13 01:59:24.943654 env[1308]: 2024-12-13 01:59:24.901 [INFO][3678] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:59:24.943654 env[1308]: 2024-12-13 01:59:24.941 [INFO][3659] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="8b8b0df0c0edd47872b987ecccb00443d71b1feaa3b14e4c3d68228795ca6ffa" Dec 13 01:59:24.944121 env[1308]: time="2024-12-13T01:59:24.943845268Z" level=info msg="TearDown network for sandbox \"8b8b0df0c0edd47872b987ecccb00443d71b1feaa3b14e4c3d68228795ca6ffa\" successfully" Dec 13 01:59:24.944121 env[1308]: time="2024-12-13T01:59:24.943882788Z" level=info msg="StopPodSandbox for \"8b8b0df0c0edd47872b987ecccb00443d71b1feaa3b14e4c3d68228795ca6ffa\" returns successfully" Dec 13 01:59:24.944610 env[1308]: time="2024-12-13T01:59:24.944576881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z69kl,Uid:7937f569-a24d-4eec-b55c-c7674aa42251,Namespace:calico-system,Attempt:1,}" Dec 13 01:59:24.946921 systemd[1]: run-netns-cni\x2d6fbe443d\x2d379b\x2decde\x2d9d7b\x2d07f43f45db08.mount: Deactivated successfully. Dec 13 01:59:25.766602 env[1308]: time="2024-12-13T01:59:25.766543355Z" level=info msg="StopPodSandbox for \"38781b7abae0ebfd72664d0396e266f8efc21e1f10de067f1aa7e04aa10ab54d\"" Dec 13 01:59:25.875163 systemd-networkd[1081]: cali26be98e69f9: Link UP Dec 13 01:59:25.877603 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 01:59:25.877844 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali26be98e69f9: link becomes ready Dec 13 01:59:25.877999 systemd-networkd[1081]: cali26be98e69f9: Gained carrier Dec 13 01:59:25.898638 env[1308]: 2024-12-13 01:59:25.780 [INFO][3695] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--z69kl-eth0 csi-node-driver- calico-system 7937f569-a24d-4eec-b55c-c7674aa42251 854 0 2024-12-13 01:59:00 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-z69kl eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali26be98e69f9 [] []}} ContainerID="d76f24044d3ac055e86eaf60d6ca255137c59e5f97cea51f6ae25b42a366acf3" Namespace="calico-system" Pod="csi-node-driver-z69kl" WorkloadEndpoint="localhost-k8s-csi--node--driver--z69kl-" Dec 13 01:59:25.898638 env[1308]: 2024-12-13 01:59:25.780 [INFO][3695] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d76f24044d3ac055e86eaf60d6ca255137c59e5f97cea51f6ae25b42a366acf3" Namespace="calico-system" Pod="csi-node-driver-z69kl" WorkloadEndpoint="localhost-k8s-csi--node--driver--z69kl-eth0" Dec 13 01:59:25.898638 env[1308]: 2024-12-13 01:59:25.830 [INFO][3746] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d76f24044d3ac055e86eaf60d6ca255137c59e5f97cea51f6ae25b42a366acf3" HandleID="k8s-pod-network.d76f24044d3ac055e86eaf60d6ca255137c59e5f97cea51f6ae25b42a366acf3" Workload="localhost-k8s-csi--node--driver--z69kl-eth0" Dec 13 01:59:25.898638 env[1308]: 2024-12-13 01:59:25.839 [INFO][3746] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d76f24044d3ac055e86eaf60d6ca255137c59e5f97cea51f6ae25b42a366acf3" HandleID="k8s-pod-network.d76f24044d3ac055e86eaf60d6ca255137c59e5f97cea51f6ae25b42a366acf3" Workload="localhost-k8s-csi--node--driver--z69kl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005b39e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-z69kl", "timestamp":"2024-12-13 
01:59:25.830384224 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:59:25.898638 env[1308]: 2024-12-13 01:59:25.839 [INFO][3746] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:59:25.898638 env[1308]: 2024-12-13 01:59:25.840 [INFO][3746] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:59:25.898638 env[1308]: 2024-12-13 01:59:25.840 [INFO][3746] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:59:25.898638 env[1308]: 2024-12-13 01:59:25.842 [INFO][3746] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d76f24044d3ac055e86eaf60d6ca255137c59e5f97cea51f6ae25b42a366acf3" host="localhost" Dec 13 01:59:25.898638 env[1308]: 2024-12-13 01:59:25.847 [INFO][3746] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:59:25.898638 env[1308]: 2024-12-13 01:59:25.852 [INFO][3746] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:59:25.898638 env[1308]: 2024-12-13 01:59:25.853 [INFO][3746] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:59:25.898638 env[1308]: 2024-12-13 01:59:25.855 [INFO][3746] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:59:25.898638 env[1308]: 2024-12-13 01:59:25.855 [INFO][3746] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d76f24044d3ac055e86eaf60d6ca255137c59e5f97cea51f6ae25b42a366acf3" host="localhost" Dec 13 01:59:25.898638 env[1308]: 2024-12-13 01:59:25.856 [INFO][3746] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d76f24044d3ac055e86eaf60d6ca255137c59e5f97cea51f6ae25b42a366acf3 Dec 13 01:59:25.898638 env[1308]: 2024-12-13 01:59:25.860 [INFO][3746] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d76f24044d3ac055e86eaf60d6ca255137c59e5f97cea51f6ae25b42a366acf3" host="localhost" Dec 13 01:59:25.898638 env[1308]: 2024-12-13 01:59:25.867 [INFO][3746] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.d76f24044d3ac055e86eaf60d6ca255137c59e5f97cea51f6ae25b42a366acf3" host="localhost" Dec 13 01:59:25.898638 env[1308]: 2024-12-13 01:59:25.867 [INFO][3746] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.d76f24044d3ac055e86eaf60d6ca255137c59e5f97cea51f6ae25b42a366acf3" host="localhost" Dec 13 01:59:25.898638 env[1308]: 2024-12-13 01:59:25.867 [INFO][3746] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
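On the ADD side, the IPAM trace above confirms the host's affinity for the 192.168.88.128/26 block, loads it, claims one free address (192.168.88.129 here), and writes the block back before releasing the lock. The sketch below is only a rough illustration of picking the first unused address in such a /26, not Calico's real allocator: it skips the network address, does not treat the broadcast address specially, and keeps state in a plain map rather than a persisted block.

```go
package main

import (
	"fmt"
	"net"
)

// nextFree returns the first address in cidr that is not already in use.
// Sketch-level assumptions: IPv4 only, the network address is skipped,
// and "in use" is a plain set of strings rather than a datastore block.
func nextFree(cidr string, inUse map[string]bool) (net.IP, error) {
	ip, ipNet, err := net.ParseCIDR(cidr)
	if err != nil {
		return nil, err
	}
	network := ip.Mask(ipNet.Mask)
	for cur := network; ipNet.Contains(cur); cur = incr(cur) {
		if cur.Equal(network) {
			continue // skip the network address itself
		}
		if !inUse[cur.String()] {
			return cur, nil
		}
	}
	return nil, fmt.Errorf("no free addresses in %s", cidr)
}

// incr returns ip+1 without modifying its argument.
func incr(ip net.IP) net.IP {
	next := make(net.IP, len(ip))
	copy(next, ip)
	for i := len(next) - 1; i >= 0; i-- {
		next[i]++
		if next[i] != 0 {
			break
		}
	}
	return next
}

func main() {
	ip, err := nextFree("192.168.88.128/26", map[string]bool{})
	if err != nil {
		panic(err)
	}
	fmt.Println(ip) // 192.168.88.129, matching the first address claimed above
}
```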
Dec 13 01:59:25.898638 env[1308]: 2024-12-13 01:59:25.868 [INFO][3746] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="d76f24044d3ac055e86eaf60d6ca255137c59e5f97cea51f6ae25b42a366acf3" HandleID="k8s-pod-network.d76f24044d3ac055e86eaf60d6ca255137c59e5f97cea51f6ae25b42a366acf3" Workload="localhost-k8s-csi--node--driver--z69kl-eth0" Dec 13 01:59:25.899498 env[1308]: 2024-12-13 01:59:25.870 [INFO][3695] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d76f24044d3ac055e86eaf60d6ca255137c59e5f97cea51f6ae25b42a366acf3" Namespace="calico-system" Pod="csi-node-driver-z69kl" WorkloadEndpoint="localhost-k8s-csi--node--driver--z69kl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--z69kl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7937f569-a24d-4eec-b55c-c7674aa42251", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 59, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-z69kl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali26be98e69f9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:59:25.899498 env[1308]: 2024-12-13 01:59:25.871 [INFO][3695] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="d76f24044d3ac055e86eaf60d6ca255137c59e5f97cea51f6ae25b42a366acf3" Namespace="calico-system" Pod="csi-node-driver-z69kl" WorkloadEndpoint="localhost-k8s-csi--node--driver--z69kl-eth0" Dec 13 01:59:25.899498 env[1308]: 2024-12-13 01:59:25.871 [INFO][3695] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali26be98e69f9 ContainerID="d76f24044d3ac055e86eaf60d6ca255137c59e5f97cea51f6ae25b42a366acf3" Namespace="calico-system" Pod="csi-node-driver-z69kl" WorkloadEndpoint="localhost-k8s-csi--node--driver--z69kl-eth0" Dec 13 01:59:25.899498 env[1308]: 2024-12-13 01:59:25.878 [INFO][3695] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d76f24044d3ac055e86eaf60d6ca255137c59e5f97cea51f6ae25b42a366acf3" Namespace="calico-system" Pod="csi-node-driver-z69kl" WorkloadEndpoint="localhost-k8s-csi--node--driver--z69kl-eth0" Dec 13 01:59:25.899498 env[1308]: 2024-12-13 01:59:25.882 [INFO][3695] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d76f24044d3ac055e86eaf60d6ca255137c59e5f97cea51f6ae25b42a366acf3" Namespace="calico-system" Pod="csi-node-driver-z69kl" WorkloadEndpoint="localhost-k8s-csi--node--driver--z69kl-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--z69kl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7937f569-a24d-4eec-b55c-c7674aa42251", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 59, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d76f24044d3ac055e86eaf60d6ca255137c59e5f97cea51f6ae25b42a366acf3", Pod:"csi-node-driver-z69kl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali26be98e69f9", MAC:"26:e4:f5:4b:5f:1a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:59:25.899498 env[1308]: 2024-12-13 01:59:25.893 [INFO][3695] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d76f24044d3ac055e86eaf60d6ca255137c59e5f97cea51f6ae25b42a366acf3" Namespace="calico-system" Pod="csi-node-driver-z69kl" WorkloadEndpoint="localhost-k8s-csi--node--driver--z69kl-eth0" Dec 13 01:59:25.913000 audit[3783]: NETFILTER_CFG table=filter:101 family=2 entries=34 op=nft_register_chain pid=3783 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 01:59:25.913000 audit[3783]: SYSCALL arch=c000003e syscall=46 success=yes exit=19148 a0=3 a1=7fff28fff080 a2=0 a3=7fff28fff06c items=0 ppid=3436 pid=3783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:25.913000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 01:59:25.930017 env[1308]: time="2024-12-13T01:59:25.929457965Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:59:25.930017 env[1308]: time="2024-12-13T01:59:25.929511695Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:59:25.930017 env[1308]: time="2024-12-13T01:59:25.929524579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:59:25.936448 env[1308]: time="2024-12-13T01:59:25.930000092Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d76f24044d3ac055e86eaf60d6ca255137c59e5f97cea51f6ae25b42a366acf3 pid=3791 runtime=io.containerd.runc.v2 Dec 13 01:59:25.940403 systemd-networkd[1081]: cali68cbb84f4c5: Link UP Dec 13 01:59:25.944320 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali68cbb84f4c5: link becomes ready Dec 13 01:59:25.944551 systemd-networkd[1081]: cali68cbb84f4c5: Gained carrier Dec 13 01:59:25.968305 systemd[1]: run-containerd-runc-k8s.io-d76f24044d3ac055e86eaf60d6ca255137c59e5f97cea51f6ae25b42a366acf3-runc.gbUYID.mount: Deactivated successfully. Dec 13 01:59:25.969385 env[1308]: 2024-12-13 01:59:25.834 [INFO][3739] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="38781b7abae0ebfd72664d0396e266f8efc21e1f10de067f1aa7e04aa10ab54d" Dec 13 01:59:25.969385 env[1308]: 2024-12-13 01:59:25.834 [INFO][3739] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="38781b7abae0ebfd72664d0396e266f8efc21e1f10de067f1aa7e04aa10ab54d" iface="eth0" netns="/var/run/netns/cni-75823aac-b16a-4f82-a25c-8c26630fd8f4" Dec 13 01:59:25.969385 env[1308]: 2024-12-13 01:59:25.835 [INFO][3739] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="38781b7abae0ebfd72664d0396e266f8efc21e1f10de067f1aa7e04aa10ab54d" iface="eth0" netns="/var/run/netns/cni-75823aac-b16a-4f82-a25c-8c26630fd8f4" Dec 13 01:59:25.969385 env[1308]: 2024-12-13 01:59:25.835 [INFO][3739] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="38781b7abae0ebfd72664d0396e266f8efc21e1f10de067f1aa7e04aa10ab54d" iface="eth0" netns="/var/run/netns/cni-75823aac-b16a-4f82-a25c-8c26630fd8f4" Dec 13 01:59:25.969385 env[1308]: 2024-12-13 01:59:25.835 [INFO][3739] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="38781b7abae0ebfd72664d0396e266f8efc21e1f10de067f1aa7e04aa10ab54d" Dec 13 01:59:25.969385 env[1308]: 2024-12-13 01:59:25.835 [INFO][3739] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="38781b7abae0ebfd72664d0396e266f8efc21e1f10de067f1aa7e04aa10ab54d" Dec 13 01:59:25.969385 env[1308]: 2024-12-13 01:59:25.864 [INFO][3761] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="38781b7abae0ebfd72664d0396e266f8efc21e1f10de067f1aa7e04aa10ab54d" HandleID="k8s-pod-network.38781b7abae0ebfd72664d0396e266f8efc21e1f10de067f1aa7e04aa10ab54d" Workload="localhost-k8s-calico--apiserver--76c6f6c975--rhnlz-eth0" Dec 13 01:59:25.969385 env[1308]: 2024-12-13 01:59:25.864 [INFO][3761] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:59:25.969385 env[1308]: 2024-12-13 01:59:25.933 [INFO][3761] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:59:25.969385 env[1308]: 2024-12-13 01:59:25.941 [WARNING][3761] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="38781b7abae0ebfd72664d0396e266f8efc21e1f10de067f1aa7e04aa10ab54d" HandleID="k8s-pod-network.38781b7abae0ebfd72664d0396e266f8efc21e1f10de067f1aa7e04aa10ab54d" Workload="localhost-k8s-calico--apiserver--76c6f6c975--rhnlz-eth0" Dec 13 01:59:25.969385 env[1308]: 2024-12-13 01:59:25.941 [INFO][3761] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="38781b7abae0ebfd72664d0396e266f8efc21e1f10de067f1aa7e04aa10ab54d" HandleID="k8s-pod-network.38781b7abae0ebfd72664d0396e266f8efc21e1f10de067f1aa7e04aa10ab54d" Workload="localhost-k8s-calico--apiserver--76c6f6c975--rhnlz-eth0" Dec 13 01:59:25.969385 env[1308]: 2024-12-13 01:59:25.953 [INFO][3761] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:59:25.969385 env[1308]: 2024-12-13 01:59:25.963 [INFO][3739] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="38781b7abae0ebfd72664d0396e266f8efc21e1f10de067f1aa7e04aa10ab54d" Dec 13 01:59:25.969882 env[1308]: time="2024-12-13T01:59:25.969448146Z" level=info msg="TearDown network for sandbox \"38781b7abae0ebfd72664d0396e266f8efc21e1f10de067f1aa7e04aa10ab54d\" successfully" Dec 13 01:59:25.969882 env[1308]: time="2024-12-13T01:59:25.969508600Z" level=info msg="StopPodSandbox for \"38781b7abae0ebfd72664d0396e266f8efc21e1f10de067f1aa7e04aa10ab54d\" returns successfully" Dec 13 01:59:25.970610 env[1308]: time="2024-12-13T01:59:25.970567067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76c6f6c975-rhnlz,Uid:486a846a-be07-4723-8e84-72e633e51630,Namespace:calico-apiserver,Attempt:1,}" Dec 13 01:59:25.972382 systemd[1]: run-netns-cni\x2d75823aac\x2db16a\x2d4f82\x2da25c\x2d8c26630fd8f4.mount: Deactivated successfully. Dec 13 01:59:25.977323 env[1308]: 2024-12-13 01:59:25.791 [INFO][3707] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6ff7c669bd--mkmgc-eth0 calico-kube-controllers-6ff7c669bd- calico-system 4746d340-1c7d-4392-8db5-c68575618d26 853 0 2024-12-13 01:59:00 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6ff7c669bd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6ff7c669bd-mkmgc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali68cbb84f4c5 [] []}} ContainerID="3f2b6191f84a72fdbafc6efe51af224edc248d09c2fcabeccbba94254e9cb061" Namespace="calico-system" Pod="calico-kube-controllers-6ff7c669bd-mkmgc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6ff7c669bd--mkmgc-" Dec 13 01:59:25.977323 env[1308]: 2024-12-13 01:59:25.791 [INFO][3707] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3f2b6191f84a72fdbafc6efe51af224edc248d09c2fcabeccbba94254e9cb061" Namespace="calico-system" Pod="calico-kube-controllers-6ff7c669bd-mkmgc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6ff7c669bd--mkmgc-eth0" Dec 13 01:59:25.977323 env[1308]: 2024-12-13 01:59:25.828 [INFO][3751] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3f2b6191f84a72fdbafc6efe51af224edc248d09c2fcabeccbba94254e9cb061" HandleID="k8s-pod-network.3f2b6191f84a72fdbafc6efe51af224edc248d09c2fcabeccbba94254e9cb061" Workload="localhost-k8s-calico--kube--controllers--6ff7c669bd--mkmgc-eth0" Dec 13 01:59:25.977323 env[1308]: 2024-12-13 
01:59:25.841 [INFO][3751] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3f2b6191f84a72fdbafc6efe51af224edc248d09c2fcabeccbba94254e9cb061" HandleID="k8s-pod-network.3f2b6191f84a72fdbafc6efe51af224edc248d09c2fcabeccbba94254e9cb061" Workload="localhost-k8s-calico--kube--controllers--6ff7c669bd--mkmgc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000334ca0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6ff7c669bd-mkmgc", "timestamp":"2024-12-13 01:59:25.828271779 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:59:25.977323 env[1308]: 2024-12-13 01:59:25.841 [INFO][3751] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:59:25.977323 env[1308]: 2024-12-13 01:59:25.867 [INFO][3751] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:59:25.977323 env[1308]: 2024-12-13 01:59:25.867 [INFO][3751] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:59:25.977323 env[1308]: 2024-12-13 01:59:25.870 [INFO][3751] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3f2b6191f84a72fdbafc6efe51af224edc248d09c2fcabeccbba94254e9cb061" host="localhost" Dec 13 01:59:25.977323 env[1308]: 2024-12-13 01:59:25.879 [INFO][3751] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:59:25.977323 env[1308]: 2024-12-13 01:59:25.885 [INFO][3751] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:59:25.977323 env[1308]: 2024-12-13 01:59:25.888 [INFO][3751] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:59:25.977323 env[1308]: 2024-12-13 01:59:25.902 [INFO][3751] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:59:25.977323 env[1308]: 2024-12-13 01:59:25.902 [INFO][3751] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3f2b6191f84a72fdbafc6efe51af224edc248d09c2fcabeccbba94254e9cb061" host="localhost" Dec 13 01:59:25.977323 env[1308]: 2024-12-13 01:59:25.905 [INFO][3751] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3f2b6191f84a72fdbafc6efe51af224edc248d09c2fcabeccbba94254e9cb061 Dec 13 01:59:25.977323 env[1308]: 2024-12-13 01:59:25.920 [INFO][3751] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3f2b6191f84a72fdbafc6efe51af224edc248d09c2fcabeccbba94254e9cb061" host="localhost" Dec 13 01:59:25.977323 env[1308]: 2024-12-13 01:59:25.933 [INFO][3751] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.3f2b6191f84a72fdbafc6efe51af224edc248d09c2fcabeccbba94254e9cb061" host="localhost" Dec 13 01:59:25.977323 env[1308]: 2024-12-13 01:59:25.933 [INFO][3751] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.3f2b6191f84a72fdbafc6efe51af224edc248d09c2fcabeccbba94254e9cb061" host="localhost" Dec 13 01:59:25.977323 env[1308]: 2024-12-13 01:59:25.933 [INFO][3751] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
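The audit NETFILTER_CFG/SYSCALL/PROCTITLE triples interleaved with these entries carry the command line of the iptables restore process as a hex string of NUL-separated arguments. A self-contained Go sketch for decoding such a PROCTITLE payload follows; the function name is mine, and the constant is the value from the pid 3783 record above, which decodes to "iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000".

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// decodeProctitle turns an audit PROCTITLE hex payload into the argv it
// encodes: the raw bytes are the process's command line, with arguments
// separated by NUL bytes.
func decodeProctitle(h string) (string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return "", err
	}
	args := strings.Split(strings.TrimRight(string(raw), "\x00"), "\x00")
	return strings.Join(args, " "), nil
}

func main() {
	// PROCTITLE value from the audit record above (pid 3783).
	const p = "69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030"
	cmd, err := decodeProctitle(p)
	if err != nil {
		panic(err)
	}
	fmt.Println(cmd) // iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000
}
```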
Dec 13 01:59:25.977323 env[1308]: 2024-12-13 01:59:25.933 [INFO][3751] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="3f2b6191f84a72fdbafc6efe51af224edc248d09c2fcabeccbba94254e9cb061" HandleID="k8s-pod-network.3f2b6191f84a72fdbafc6efe51af224edc248d09c2fcabeccbba94254e9cb061" Workload="localhost-k8s-calico--kube--controllers--6ff7c669bd--mkmgc-eth0" Dec 13 01:59:25.978134 env[1308]: 2024-12-13 01:59:25.937 [INFO][3707] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3f2b6191f84a72fdbafc6efe51af224edc248d09c2fcabeccbba94254e9cb061" Namespace="calico-system" Pod="calico-kube-controllers-6ff7c669bd-mkmgc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6ff7c669bd--mkmgc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6ff7c669bd--mkmgc-eth0", GenerateName:"calico-kube-controllers-6ff7c669bd-", Namespace:"calico-system", SelfLink:"", UID:"4746d340-1c7d-4392-8db5-c68575618d26", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 59, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6ff7c669bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6ff7c669bd-mkmgc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali68cbb84f4c5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:59:25.978134 env[1308]: 2024-12-13 01:59:25.937 [INFO][3707] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="3f2b6191f84a72fdbafc6efe51af224edc248d09c2fcabeccbba94254e9cb061" Namespace="calico-system" Pod="calico-kube-controllers-6ff7c669bd-mkmgc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6ff7c669bd--mkmgc-eth0" Dec 13 01:59:25.978134 env[1308]: 2024-12-13 01:59:25.937 [INFO][3707] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali68cbb84f4c5 ContainerID="3f2b6191f84a72fdbafc6efe51af224edc248d09c2fcabeccbba94254e9cb061" Namespace="calico-system" Pod="calico-kube-controllers-6ff7c669bd-mkmgc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6ff7c669bd--mkmgc-eth0" Dec 13 01:59:25.978134 env[1308]: 2024-12-13 01:59:25.940 [INFO][3707] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3f2b6191f84a72fdbafc6efe51af224edc248d09c2fcabeccbba94254e9cb061" Namespace="calico-system" Pod="calico-kube-controllers-6ff7c669bd-mkmgc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6ff7c669bd--mkmgc-eth0" Dec 13 01:59:25.978134 env[1308]: 2024-12-13 01:59:25.942 [INFO][3707] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="3f2b6191f84a72fdbafc6efe51af224edc248d09c2fcabeccbba94254e9cb061" Namespace="calico-system" Pod="calico-kube-controllers-6ff7c669bd-mkmgc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6ff7c669bd--mkmgc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6ff7c669bd--mkmgc-eth0", GenerateName:"calico-kube-controllers-6ff7c669bd-", Namespace:"calico-system", SelfLink:"", UID:"4746d340-1c7d-4392-8db5-c68575618d26", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 59, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6ff7c669bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3f2b6191f84a72fdbafc6efe51af224edc248d09c2fcabeccbba94254e9cb061", Pod:"calico-kube-controllers-6ff7c669bd-mkmgc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali68cbb84f4c5", MAC:"b6:93:21:e1:b9:6f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:59:25.978134 env[1308]: 2024-12-13 01:59:25.974 [INFO][3707] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3f2b6191f84a72fdbafc6efe51af224edc248d09c2fcabeccbba94254e9cb061" Namespace="calico-system" Pod="calico-kube-controllers-6ff7c669bd-mkmgc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6ff7c669bd--mkmgc-eth0" Dec 13 01:59:25.982000 audit[3828]: NETFILTER_CFG table=filter:102 family=2 entries=34 op=nft_register_chain pid=3828 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 01:59:25.982000 audit[3828]: SYSCALL arch=c000003e syscall=46 success=yes exit=18640 a0=3 a1=7ffceaee3e70 a2=0 a3=7ffceaee3e5c items=0 ppid=3436 pid=3828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:25.982000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 01:59:25.986760 systemd-resolved[1225]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:59:26.005826 env[1308]: time="2024-12-13T01:59:26.005566550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z69kl,Uid:7937f569-a24d-4eec-b55c-c7674aa42251,Namespace:calico-system,Attempt:1,} returns sandbox id \"d76f24044d3ac055e86eaf60d6ca255137c59e5f97cea51f6ae25b42a366acf3\"" Dec 13 01:59:26.008301 env[1308]: time="2024-12-13T01:59:26.007935015Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 01:59:26.012362 env[1308]: 
time="2024-12-13T01:59:26.012274921Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:59:26.013744 env[1308]: time="2024-12-13T01:59:26.012697163Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:59:26.013744 env[1308]: time="2024-12-13T01:59:26.012720066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:59:26.013744 env[1308]: time="2024-12-13T01:59:26.013466076Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f2b6191f84a72fdbafc6efe51af224edc248d09c2fcabeccbba94254e9cb061 pid=3849 runtime=io.containerd.runc.v2 Dec 13 01:59:26.047290 systemd-resolved[1225]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:59:26.087710 env[1308]: time="2024-12-13T01:59:26.087645293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6ff7c669bd-mkmgc,Uid:4746d340-1c7d-4392-8db5-c68575618d26,Namespace:calico-system,Attempt:1,} returns sandbox id \"3f2b6191f84a72fdbafc6efe51af224edc248d09c2fcabeccbba94254e9cb061\"" Dec 13 01:59:26.186607 systemd-networkd[1081]: calie20cb9b7f2c: Link UP Dec 13 01:59:26.188067 systemd-networkd[1081]: calie20cb9b7f2c: Gained carrier Dec 13 01:59:26.188870 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calie20cb9b7f2c: link becomes ready Dec 13 01:59:26.203815 env[1308]: 2024-12-13 01:59:26.099 [INFO][3877] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--76c6f6c975--rhnlz-eth0 calico-apiserver-76c6f6c975- calico-apiserver 486a846a-be07-4723-8e84-72e633e51630 861 0 2024-12-13 01:59:00 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:76c6f6c975 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-76c6f6c975-rhnlz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie20cb9b7f2c [] []}} ContainerID="af6c214d6a9c2e9ec35fcfd8ee70671f89a8925aef7c40d4165aaf878cce43c7" Namespace="calico-apiserver" Pod="calico-apiserver-76c6f6c975-rhnlz" WorkloadEndpoint="localhost-k8s-calico--apiserver--76c6f6c975--rhnlz-" Dec 13 01:59:26.203815 env[1308]: 2024-12-13 01:59:26.099 [INFO][3877] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="af6c214d6a9c2e9ec35fcfd8ee70671f89a8925aef7c40d4165aaf878cce43c7" Namespace="calico-apiserver" Pod="calico-apiserver-76c6f6c975-rhnlz" WorkloadEndpoint="localhost-k8s-calico--apiserver--76c6f6c975--rhnlz-eth0" Dec 13 01:59:26.203815 env[1308]: 2024-12-13 01:59:26.136 [INFO][3897] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="af6c214d6a9c2e9ec35fcfd8ee70671f89a8925aef7c40d4165aaf878cce43c7" HandleID="k8s-pod-network.af6c214d6a9c2e9ec35fcfd8ee70671f89a8925aef7c40d4165aaf878cce43c7" Workload="localhost-k8s-calico--apiserver--76c6f6c975--rhnlz-eth0" Dec 13 01:59:26.203815 env[1308]: 2024-12-13 01:59:26.148 [INFO][3897] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="af6c214d6a9c2e9ec35fcfd8ee70671f89a8925aef7c40d4165aaf878cce43c7" 
HandleID="k8s-pod-network.af6c214d6a9c2e9ec35fcfd8ee70671f89a8925aef7c40d4165aaf878cce43c7" Workload="localhost-k8s-calico--apiserver--76c6f6c975--rhnlz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00061d140), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-76c6f6c975-rhnlz", "timestamp":"2024-12-13 01:59:26.136924254 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:59:26.203815 env[1308]: 2024-12-13 01:59:26.148 [INFO][3897] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:59:26.203815 env[1308]: 2024-12-13 01:59:26.148 [INFO][3897] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:59:26.203815 env[1308]: 2024-12-13 01:59:26.148 [INFO][3897] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:59:26.203815 env[1308]: 2024-12-13 01:59:26.150 [INFO][3897] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.af6c214d6a9c2e9ec35fcfd8ee70671f89a8925aef7c40d4165aaf878cce43c7" host="localhost" Dec 13 01:59:26.203815 env[1308]: 2024-12-13 01:59:26.155 [INFO][3897] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:59:26.203815 env[1308]: 2024-12-13 01:59:26.161 [INFO][3897] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:59:26.203815 env[1308]: 2024-12-13 01:59:26.163 [INFO][3897] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:59:26.203815 env[1308]: 2024-12-13 01:59:26.165 [INFO][3897] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:59:26.203815 env[1308]: 2024-12-13 01:59:26.165 [INFO][3897] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.af6c214d6a9c2e9ec35fcfd8ee70671f89a8925aef7c40d4165aaf878cce43c7" host="localhost" Dec 13 01:59:26.203815 env[1308]: 2024-12-13 01:59:26.167 [INFO][3897] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.af6c214d6a9c2e9ec35fcfd8ee70671f89a8925aef7c40d4165aaf878cce43c7 Dec 13 01:59:26.203815 env[1308]: 2024-12-13 01:59:26.173 [INFO][3897] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.af6c214d6a9c2e9ec35fcfd8ee70671f89a8925aef7c40d4165aaf878cce43c7" host="localhost" Dec 13 01:59:26.203815 env[1308]: 2024-12-13 01:59:26.181 [INFO][3897] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.af6c214d6a9c2e9ec35fcfd8ee70671f89a8925aef7c40d4165aaf878cce43c7" host="localhost" Dec 13 01:59:26.203815 env[1308]: 2024-12-13 01:59:26.181 [INFO][3897] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.af6c214d6a9c2e9ec35fcfd8ee70671f89a8925aef7c40d4165aaf878cce43c7" host="localhost" Dec 13 01:59:26.203815 env[1308]: 2024-12-13 01:59:26.181 [INFO][3897] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:59:26.203815 env[1308]: 2024-12-13 01:59:26.181 [INFO][3897] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="af6c214d6a9c2e9ec35fcfd8ee70671f89a8925aef7c40d4165aaf878cce43c7" HandleID="k8s-pod-network.af6c214d6a9c2e9ec35fcfd8ee70671f89a8925aef7c40d4165aaf878cce43c7" Workload="localhost-k8s-calico--apiserver--76c6f6c975--rhnlz-eth0" Dec 13 01:59:26.204573 env[1308]: 2024-12-13 01:59:26.184 [INFO][3877] cni-plugin/k8s.go 386: Populated endpoint ContainerID="af6c214d6a9c2e9ec35fcfd8ee70671f89a8925aef7c40d4165aaf878cce43c7" Namespace="calico-apiserver" Pod="calico-apiserver-76c6f6c975-rhnlz" WorkloadEndpoint="localhost-k8s-calico--apiserver--76c6f6c975--rhnlz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76c6f6c975--rhnlz-eth0", GenerateName:"calico-apiserver-76c6f6c975-", Namespace:"calico-apiserver", SelfLink:"", UID:"486a846a-be07-4723-8e84-72e633e51630", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 59, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76c6f6c975", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-76c6f6c975-rhnlz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie20cb9b7f2c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:59:26.204573 env[1308]: 2024-12-13 01:59:26.184 [INFO][3877] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="af6c214d6a9c2e9ec35fcfd8ee70671f89a8925aef7c40d4165aaf878cce43c7" Namespace="calico-apiserver" Pod="calico-apiserver-76c6f6c975-rhnlz" WorkloadEndpoint="localhost-k8s-calico--apiserver--76c6f6c975--rhnlz-eth0" Dec 13 01:59:26.204573 env[1308]: 2024-12-13 01:59:26.184 [INFO][3877] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie20cb9b7f2c ContainerID="af6c214d6a9c2e9ec35fcfd8ee70671f89a8925aef7c40d4165aaf878cce43c7" Namespace="calico-apiserver" Pod="calico-apiserver-76c6f6c975-rhnlz" WorkloadEndpoint="localhost-k8s-calico--apiserver--76c6f6c975--rhnlz-eth0" Dec 13 01:59:26.204573 env[1308]: 2024-12-13 01:59:26.188 [INFO][3877] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="af6c214d6a9c2e9ec35fcfd8ee70671f89a8925aef7c40d4165aaf878cce43c7" Namespace="calico-apiserver" Pod="calico-apiserver-76c6f6c975-rhnlz" WorkloadEndpoint="localhost-k8s-calico--apiserver--76c6f6c975--rhnlz-eth0" Dec 13 01:59:26.204573 env[1308]: 2024-12-13 01:59:26.188 [INFO][3877] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="af6c214d6a9c2e9ec35fcfd8ee70671f89a8925aef7c40d4165aaf878cce43c7" Namespace="calico-apiserver" 
Pod="calico-apiserver-76c6f6c975-rhnlz" WorkloadEndpoint="localhost-k8s-calico--apiserver--76c6f6c975--rhnlz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76c6f6c975--rhnlz-eth0", GenerateName:"calico-apiserver-76c6f6c975-", Namespace:"calico-apiserver", SelfLink:"", UID:"486a846a-be07-4723-8e84-72e633e51630", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 59, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76c6f6c975", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"af6c214d6a9c2e9ec35fcfd8ee70671f89a8925aef7c40d4165aaf878cce43c7", Pod:"calico-apiserver-76c6f6c975-rhnlz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie20cb9b7f2c", MAC:"be:3c:d0:00:2e:1d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:59:26.204573 env[1308]: 2024-12-13 01:59:26.201 [INFO][3877] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="af6c214d6a9c2e9ec35fcfd8ee70671f89a8925aef7c40d4165aaf878cce43c7" Namespace="calico-apiserver" Pod="calico-apiserver-76c6f6c975-rhnlz" WorkloadEndpoint="localhost-k8s-calico--apiserver--76c6f6c975--rhnlz-eth0" Dec 13 01:59:26.211000 audit[3919]: NETFILTER_CFG table=filter:103 family=2 entries=48 op=nft_register_chain pid=3919 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 01:59:26.211000 audit[3919]: SYSCALL arch=c000003e syscall=46 success=yes exit=25868 a0=3 a1=7ffc5e5d3020 a2=0 a3=7ffc5e5d300c items=0 ppid=3436 pid=3919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:26.211000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 01:59:26.222595 env[1308]: time="2024-12-13T01:59:26.222479261Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:59:26.222595 env[1308]: time="2024-12-13T01:59:26.222556326Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:59:26.222860 env[1308]: time="2024-12-13T01:59:26.222570903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:59:26.222967 env[1308]: time="2024-12-13T01:59:26.222918807Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/af6c214d6a9c2e9ec35fcfd8ee70671f89a8925aef7c40d4165aaf878cce43c7 pid=3927 runtime=io.containerd.runc.v2 Dec 13 01:59:26.250844 systemd-resolved[1225]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:59:26.280041 env[1308]: time="2024-12-13T01:59:26.279991336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76c6f6c975-rhnlz,Uid:486a846a-be07-4723-8e84-72e633e51630,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"af6c214d6a9c2e9ec35fcfd8ee70671f89a8925aef7c40d4165aaf878cce43c7\"" Dec 13 01:59:26.765801 env[1308]: time="2024-12-13T01:59:26.765744502Z" level=info msg="StopPodSandbox for \"f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c\"" Dec 13 01:59:26.990435 env[1308]: 2024-12-13 01:59:26.856 [INFO][3977] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c" Dec 13 01:59:26.990435 env[1308]: 2024-12-13 01:59:26.857 [INFO][3977] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c" iface="eth0" netns="/var/run/netns/cni-7bac396d-5ae1-e7d6-83dd-39b57667f1c0" Dec 13 01:59:26.990435 env[1308]: 2024-12-13 01:59:26.857 [INFO][3977] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c" iface="eth0" netns="/var/run/netns/cni-7bac396d-5ae1-e7d6-83dd-39b57667f1c0" Dec 13 01:59:26.990435 env[1308]: 2024-12-13 01:59:26.857 [INFO][3977] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c" iface="eth0" netns="/var/run/netns/cni-7bac396d-5ae1-e7d6-83dd-39b57667f1c0" Dec 13 01:59:26.990435 env[1308]: 2024-12-13 01:59:26.857 [INFO][3977] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c" Dec 13 01:59:26.990435 env[1308]: 2024-12-13 01:59:26.857 [INFO][3977] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c" Dec 13 01:59:26.990435 env[1308]: 2024-12-13 01:59:26.877 [INFO][3984] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c" HandleID="k8s-pod-network.f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c" Workload="localhost-k8s-coredns--76f75df574--dfz7m-eth0" Dec 13 01:59:26.990435 env[1308]: 2024-12-13 01:59:26.877 [INFO][3984] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:59:26.990435 env[1308]: 2024-12-13 01:59:26.878 [INFO][3984] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:59:26.990435 env[1308]: 2024-12-13 01:59:26.985 [WARNING][3984] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c" HandleID="k8s-pod-network.f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c" Workload="localhost-k8s-coredns--76f75df574--dfz7m-eth0" Dec 13 01:59:26.990435 env[1308]: 2024-12-13 01:59:26.985 [INFO][3984] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c" HandleID="k8s-pod-network.f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c" Workload="localhost-k8s-coredns--76f75df574--dfz7m-eth0" Dec 13 01:59:26.990435 env[1308]: 2024-12-13 01:59:26.986 [INFO][3984] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:59:26.990435 env[1308]: 2024-12-13 01:59:26.988 [INFO][3977] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c" Dec 13 01:59:26.991254 env[1308]: time="2024-12-13T01:59:26.990605933Z" level=info msg="TearDown network for sandbox \"f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c\" successfully" Dec 13 01:59:26.991254 env[1308]: time="2024-12-13T01:59:26.990634767Z" level=info msg="StopPodSandbox for \"f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c\" returns successfully" Dec 13 01:59:26.991323 kubelet[2213]: E1213 01:59:26.990940 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:59:26.992549 env[1308]: time="2024-12-13T01:59:26.992511920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-dfz7m,Uid:d5f89f4f-4968-46b5-9f7b-bc1d64f6aef1,Namespace:kube-system,Attempt:1,}" Dec 13 01:59:26.993620 systemd[1]: run-netns-cni\x2d7bac396d\x2d5ae1\x2de7d6\x2d83dd\x2d39b57667f1c0.mount: Deactivated successfully. 
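The run-netns-cni\x2d….mount units that systemd reports as deactivated are the /run/netns/cni-… bind mounts with their paths escaped into unit names: the leading slash is dropped, remaining slashes become dashes, and bytes such as "-" are hex-escaped as \x2d (/var/run is a symlink to /run, which is why the names start with run-netns). The Go sketch below is a rough approximation of that escaping, enough to reproduce the unit names in this log but not a full systemd-escape replacement.

```go
package main

import "fmt"

// escapePathForUnit approximates systemd's path escaping for mount unit
// names: the leading "/" is dropped, remaining "/" become "-", and bytes
// outside [A-Za-z0-9_.:] are written as \xNN (so "-" becomes \x2d).
func escapePathForUnit(path string) string {
	if len(path) > 0 && path[0] == '/' {
		path = path[1:]
	}
	out := ""
	for i := 0; i < len(path); i++ {
		c := path[i]
		switch {
		case c == '/':
			out += "-"
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == '_', c == ':', c == '.' && i > 0:
			out += string(c)
		default:
			out += fmt.Sprintf(`\x%02x`, c)
		}
	}
	return out
}

func main() {
	// Netns path from the teardown above, as mounted under /run.
	p := "/run/netns/cni-7bac396d-5ae1-e7d6-83dd-39b57667f1c0"
	fmt.Println(escapePathForUnit(p) + ".mount")
	// run-netns-cni\x2d7bac396d\x2d5ae1\x2de7d6\x2d83dd\x2d39b57667f1c0.mount
}
```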
Dec 13 01:59:27.105826 systemd-networkd[1081]: cali16bf58cb103: Link UP Dec 13 01:59:27.108415 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 01:59:27.108483 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali16bf58cb103: link becomes ready Dec 13 01:59:27.108660 systemd-networkd[1081]: cali16bf58cb103: Gained carrier Dec 13 01:59:27.119387 env[1308]: 2024-12-13 01:59:27.037 [INFO][3992] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--dfz7m-eth0 coredns-76f75df574- kube-system d5f89f4f-4968-46b5-9f7b-bc1d64f6aef1 878 0 2024-12-13 01:58:53 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-dfz7m eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali16bf58cb103 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="bc53cd1418ce2cf4d0f18fee993988babd90657c4cc3e1aa14ea73ed2ddc989d" Namespace="kube-system" Pod="coredns-76f75df574-dfz7m" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--dfz7m-" Dec 13 01:59:27.119387 env[1308]: 2024-12-13 01:59:27.037 [INFO][3992] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bc53cd1418ce2cf4d0f18fee993988babd90657c4cc3e1aa14ea73ed2ddc989d" Namespace="kube-system" Pod="coredns-76f75df574-dfz7m" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--dfz7m-eth0" Dec 13 01:59:27.119387 env[1308]: 2024-12-13 01:59:27.066 [INFO][4006] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bc53cd1418ce2cf4d0f18fee993988babd90657c4cc3e1aa14ea73ed2ddc989d" HandleID="k8s-pod-network.bc53cd1418ce2cf4d0f18fee993988babd90657c4cc3e1aa14ea73ed2ddc989d" Workload="localhost-k8s-coredns--76f75df574--dfz7m-eth0" Dec 13 01:59:27.119387 env[1308]: 2024-12-13 01:59:27.075 [INFO][4006] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bc53cd1418ce2cf4d0f18fee993988babd90657c4cc3e1aa14ea73ed2ddc989d" HandleID="k8s-pod-network.bc53cd1418ce2cf4d0f18fee993988babd90657c4cc3e1aa14ea73ed2ddc989d" Workload="localhost-k8s-coredns--76f75df574--dfz7m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027cc60), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-dfz7m", "timestamp":"2024-12-13 01:59:27.066142971 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:59:27.119387 env[1308]: 2024-12-13 01:59:27.075 [INFO][4006] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:59:27.119387 env[1308]: 2024-12-13 01:59:27.075 [INFO][4006] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:59:27.119387 env[1308]: 2024-12-13 01:59:27.075 [INFO][4006] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:59:27.119387 env[1308]: 2024-12-13 01:59:27.076 [INFO][4006] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bc53cd1418ce2cf4d0f18fee993988babd90657c4cc3e1aa14ea73ed2ddc989d" host="localhost" Dec 13 01:59:27.119387 env[1308]: 2024-12-13 01:59:27.079 [INFO][4006] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:59:27.119387 env[1308]: 2024-12-13 01:59:27.084 [INFO][4006] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:59:27.119387 env[1308]: 2024-12-13 01:59:27.089 [INFO][4006] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:59:27.119387 env[1308]: 2024-12-13 01:59:27.091 [INFO][4006] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:59:27.119387 env[1308]: 2024-12-13 01:59:27.091 [INFO][4006] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bc53cd1418ce2cf4d0f18fee993988babd90657c4cc3e1aa14ea73ed2ddc989d" host="localhost" Dec 13 01:59:27.119387 env[1308]: 2024-12-13 01:59:27.092 [INFO][4006] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.bc53cd1418ce2cf4d0f18fee993988babd90657c4cc3e1aa14ea73ed2ddc989d Dec 13 01:59:27.119387 env[1308]: 2024-12-13 01:59:27.096 [INFO][4006] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bc53cd1418ce2cf4d0f18fee993988babd90657c4cc3e1aa14ea73ed2ddc989d" host="localhost" Dec 13 01:59:27.119387 env[1308]: 2024-12-13 01:59:27.101 [INFO][4006] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.bc53cd1418ce2cf4d0f18fee993988babd90657c4cc3e1aa14ea73ed2ddc989d" host="localhost" Dec 13 01:59:27.119387 env[1308]: 2024-12-13 01:59:27.101 [INFO][4006] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.bc53cd1418ce2cf4d0f18fee993988babd90657c4cc3e1aa14ea73ed2ddc989d" host="localhost" Dec 13 01:59:27.119387 env[1308]: 2024-12-13 01:59:27.101 [INFO][4006] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:59:27.119387 env[1308]: 2024-12-13 01:59:27.101 [INFO][4006] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="bc53cd1418ce2cf4d0f18fee993988babd90657c4cc3e1aa14ea73ed2ddc989d" HandleID="k8s-pod-network.bc53cd1418ce2cf4d0f18fee993988babd90657c4cc3e1aa14ea73ed2ddc989d" Workload="localhost-k8s-coredns--76f75df574--dfz7m-eth0" Dec 13 01:59:27.120244 env[1308]: 2024-12-13 01:59:27.103 [INFO][3992] cni-plugin/k8s.go 386: Populated endpoint ContainerID="bc53cd1418ce2cf4d0f18fee993988babd90657c4cc3e1aa14ea73ed2ddc989d" Namespace="kube-system" Pod="coredns-76f75df574-dfz7m" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--dfz7m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--dfz7m-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"d5f89f4f-4968-46b5-9f7b-bc1d64f6aef1", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 58, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-dfz7m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali16bf58cb103", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:59:27.120244 env[1308]: 2024-12-13 01:59:27.104 [INFO][3992] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="bc53cd1418ce2cf4d0f18fee993988babd90657c4cc3e1aa14ea73ed2ddc989d" Namespace="kube-system" Pod="coredns-76f75df574-dfz7m" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--dfz7m-eth0" Dec 13 01:59:27.120244 env[1308]: 2024-12-13 01:59:27.104 [INFO][3992] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali16bf58cb103 ContainerID="bc53cd1418ce2cf4d0f18fee993988babd90657c4cc3e1aa14ea73ed2ddc989d" Namespace="kube-system" Pod="coredns-76f75df574-dfz7m" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--dfz7m-eth0" Dec 13 01:59:27.120244 env[1308]: 2024-12-13 01:59:27.108 [INFO][3992] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bc53cd1418ce2cf4d0f18fee993988babd90657c4cc3e1aa14ea73ed2ddc989d" Namespace="kube-system" Pod="coredns-76f75df574-dfz7m" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--dfz7m-eth0" Dec 13 01:59:27.120244 env[1308]: 2024-12-13 01:59:27.108 [INFO][3992] cni-plugin/k8s.go 414: Added 
Mac, interface name, and active container ID to endpoint ContainerID="bc53cd1418ce2cf4d0f18fee993988babd90657c4cc3e1aa14ea73ed2ddc989d" Namespace="kube-system" Pod="coredns-76f75df574-dfz7m" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--dfz7m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--dfz7m-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"d5f89f4f-4968-46b5-9f7b-bc1d64f6aef1", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 58, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bc53cd1418ce2cf4d0f18fee993988babd90657c4cc3e1aa14ea73ed2ddc989d", Pod:"coredns-76f75df574-dfz7m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali16bf58cb103", MAC:"32:70:e0:80:1d:c3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:59:27.120244 env[1308]: 2024-12-13 01:59:27.117 [INFO][3992] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="bc53cd1418ce2cf4d0f18fee993988babd90657c4cc3e1aa14ea73ed2ddc989d" Namespace="kube-system" Pod="coredns-76f75df574-dfz7m" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--dfz7m-eth0" Dec 13 01:59:27.128000 audit[4027]: NETFILTER_CFG table=filter:104 family=2 entries=46 op=nft_register_chain pid=4027 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 01:59:27.128000 audit[4027]: SYSCALL arch=c000003e syscall=46 success=yes exit=22712 a0=3 a1=7ffc94ef82c0 a2=0 a3=7ffc94ef82ac items=0 ppid=3436 pid=4027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:27.128000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 01:59:27.134070 env[1308]: time="2024-12-13T01:59:27.133993704Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:59:27.134070 env[1308]: time="2024-12-13T01:59:27.134029370Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:59:27.134070 env[1308]: time="2024-12-13T01:59:27.134039539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:59:27.134299 env[1308]: time="2024-12-13T01:59:27.134147923Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bc53cd1418ce2cf4d0f18fee993988babd90657c4cc3e1aa14ea73ed2ddc989d pid=4034 runtime=io.containerd.runc.v2 Dec 13 01:59:27.156304 systemd-resolved[1225]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:59:27.179074 env[1308]: time="2024-12-13T01:59:27.178443116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-dfz7m,Uid:d5f89f4f-4968-46b5-9f7b-bc1d64f6aef1,Namespace:kube-system,Attempt:1,} returns sandbox id \"bc53cd1418ce2cf4d0f18fee993988babd90657c4cc3e1aa14ea73ed2ddc989d\"" Dec 13 01:59:27.179251 kubelet[2213]: E1213 01:59:27.179170 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:59:27.181613 env[1308]: time="2024-12-13T01:59:27.181510183Z" level=info msg="CreateContainer within sandbox \"bc53cd1418ce2cf4d0f18fee993988babd90657c4cc3e1aa14ea73ed2ddc989d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:59:27.199888 env[1308]: time="2024-12-13T01:59:27.199815167Z" level=info msg="CreateContainer within sandbox \"bc53cd1418ce2cf4d0f18fee993988babd90657c4cc3e1aa14ea73ed2ddc989d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e6b28e09aa320a01167ad01cb4aa6be8c4d6ad3916783d64464fd70eea8ad782\"" Dec 13 01:59:27.201874 env[1308]: time="2024-12-13T01:59:27.200938666Z" level=info msg="StartContainer for \"e6b28e09aa320a01167ad01cb4aa6be8c4d6ad3916783d64464fd70eea8ad782\"" Dec 13 01:59:27.245866 kubelet[2213]: I1213 01:59:27.245809 2213 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:59:27.246582 kubelet[2213]: E1213 01:59:27.246561 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:59:27.254508 env[1308]: time="2024-12-13T01:59:27.254462479Z" level=info msg="StartContainer for \"e6b28e09aa320a01167ad01cb4aa6be8c4d6ad3916783d64464fd70eea8ad782\" returns successfully" Dec 13 01:59:27.483036 systemd-networkd[1081]: cali26be98e69f9: Gained IPv6LL Dec 13 01:59:27.610891 systemd-networkd[1081]: cali68cbb84f4c5: Gained IPv6LL Dec 13 01:59:27.611188 systemd-networkd[1081]: calie20cb9b7f2c: Gained IPv6LL Dec 13 01:59:27.767172 env[1308]: time="2024-12-13T01:59:27.767021588Z" level=info msg="StopPodSandbox for \"c4e7b8e0bf913318ecc0fbd0a00b9520d7886a87b3238ff947e3fd43d3e03d05\"" Dec 13 01:59:27.884825 kubelet[2213]: E1213 01:59:27.884792 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:59:27.885149 kubelet[2213]: E1213 01:59:27.885132 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:59:28.069754 kubelet[2213]: I1213 01:59:28.069619 2213 
pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-dfz7m" podStartSLOduration=35.069568501 podStartE2EDuration="35.069568501s" podCreationTimestamp="2024-12-13 01:58:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:59:28.069282915 +0000 UTC m=+48.437631162" watchObservedRunningTime="2024-12-13 01:59:28.069568501 +0000 UTC m=+48.437916748" Dec 13 01:59:28.294165 env[1308]: 2024-12-13 01:59:28.117 [INFO][4163] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c4e7b8e0bf913318ecc0fbd0a00b9520d7886a87b3238ff947e3fd43d3e03d05" Dec 13 01:59:28.294165 env[1308]: 2024-12-13 01:59:28.117 [INFO][4163] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c4e7b8e0bf913318ecc0fbd0a00b9520d7886a87b3238ff947e3fd43d3e03d05" iface="eth0" netns="/var/run/netns/cni-7f5f6351-74ae-5410-27a0-5a20524313e7" Dec 13 01:59:28.294165 env[1308]: 2024-12-13 01:59:28.117 [INFO][4163] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c4e7b8e0bf913318ecc0fbd0a00b9520d7886a87b3238ff947e3fd43d3e03d05" iface="eth0" netns="/var/run/netns/cni-7f5f6351-74ae-5410-27a0-5a20524313e7" Dec 13 01:59:28.294165 env[1308]: 2024-12-13 01:59:28.117 [INFO][4163] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c4e7b8e0bf913318ecc0fbd0a00b9520d7886a87b3238ff947e3fd43d3e03d05" iface="eth0" netns="/var/run/netns/cni-7f5f6351-74ae-5410-27a0-5a20524313e7" Dec 13 01:59:28.294165 env[1308]: 2024-12-13 01:59:28.117 [INFO][4163] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c4e7b8e0bf913318ecc0fbd0a00b9520d7886a87b3238ff947e3fd43d3e03d05" Dec 13 01:59:28.294165 env[1308]: 2024-12-13 01:59:28.117 [INFO][4163] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c4e7b8e0bf913318ecc0fbd0a00b9520d7886a87b3238ff947e3fd43d3e03d05" Dec 13 01:59:28.294165 env[1308]: 2024-12-13 01:59:28.141 [INFO][4172] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c4e7b8e0bf913318ecc0fbd0a00b9520d7886a87b3238ff947e3fd43d3e03d05" HandleID="k8s-pod-network.c4e7b8e0bf913318ecc0fbd0a00b9520d7886a87b3238ff947e3fd43d3e03d05" Workload="localhost-k8s-calico--apiserver--76c6f6c975--mmn2h-eth0" Dec 13 01:59:28.294165 env[1308]: 2024-12-13 01:59:28.141 [INFO][4172] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:59:28.294165 env[1308]: 2024-12-13 01:59:28.141 [INFO][4172] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:59:28.294165 env[1308]: 2024-12-13 01:59:28.289 [WARNING][4172] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c4e7b8e0bf913318ecc0fbd0a00b9520d7886a87b3238ff947e3fd43d3e03d05" HandleID="k8s-pod-network.c4e7b8e0bf913318ecc0fbd0a00b9520d7886a87b3238ff947e3fd43d3e03d05" Workload="localhost-k8s-calico--apiserver--76c6f6c975--mmn2h-eth0" Dec 13 01:59:28.294165 env[1308]: 2024-12-13 01:59:28.289 [INFO][4172] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c4e7b8e0bf913318ecc0fbd0a00b9520d7886a87b3238ff947e3fd43d3e03d05" HandleID="k8s-pod-network.c4e7b8e0bf913318ecc0fbd0a00b9520d7886a87b3238ff947e3fd43d3e03d05" Workload="localhost-k8s-calico--apiserver--76c6f6c975--mmn2h-eth0" Dec 13 01:59:28.294165 env[1308]: 2024-12-13 01:59:28.291 [INFO][4172] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
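The podStartSLOduration figure in the entry above is simply the gap between podCreationTimestamp and the moment the startup-latency tracker sampled the clock; with no image pulls recorded (both pulling timestamps are the zero time), it can be reproduced to within a fraction of a millisecond from the two timestamps quoted in the log. A minimal sketch in Go, using only values copied from the entry (the small residue comes from the tracker reading the clock slightly after observedRunningTime):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps copied verbatim from the pod_startup_latency_tracker entry.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, _ := time.Parse(layout, "2024-12-13 01:58:53 +0000 UTC")
        running, _ := time.Parse(layout, "2024-12-13 01:59:28.069282915 +0000 UTC")

        // firstStartedPulling/lastFinishedPulling are zero, so the startup
        // duration is just observedRunningTime - podCreationTimestamp.
        fmt.Println(running.Sub(created)) // ~35.069282915s, matching podStartSLOduration≈35.07s
    }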
Dec 13 01:59:28.294165 env[1308]: 2024-12-13 01:59:28.292 [INFO][4163] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c4e7b8e0bf913318ecc0fbd0a00b9520d7886a87b3238ff947e3fd43d3e03d05" Dec 13 01:59:28.294855 env[1308]: time="2024-12-13T01:59:28.294316207Z" level=info msg="TearDown network for sandbox \"c4e7b8e0bf913318ecc0fbd0a00b9520d7886a87b3238ff947e3fd43d3e03d05\" successfully" Dec 13 01:59:28.294855 env[1308]: time="2024-12-13T01:59:28.294360631Z" level=info msg="StopPodSandbox for \"c4e7b8e0bf913318ecc0fbd0a00b9520d7886a87b3238ff947e3fd43d3e03d05\" returns successfully" Dec 13 01:59:28.294978 env[1308]: time="2024-12-13T01:59:28.294945538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76c6f6c975-mmn2h,Uid:55f83b0b-5364-4958-b303-1b06d5dd6c20,Namespace:calico-apiserver,Attempt:1,}" Dec 13 01:59:28.296698 systemd[1]: run-netns-cni\x2d7f5f6351\x2d74ae\x2d5410\x2d27a0\x2d5a20524313e7.mount: Deactivated successfully. Dec 13 01:59:28.309000 audit[4186]: NETFILTER_CFG table=filter:105 family=2 entries=16 op=nft_register_rule pid=4186 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 01:59:28.309000 audit[4186]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffcf505ed40 a2=0 a3=7ffcf505ed2c items=0 ppid=2382 pid=4186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:28.309000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:59:28.327000 audit[4186]: NETFILTER_CFG table=nat:106 family=2 entries=14 op=nft_register_rule pid=4186 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 01:59:28.327000 audit[4186]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7ffcf505ed40 a2=0 a3=0 items=0 ppid=2382 pid=4186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:28.327000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:59:28.424545 env[1308]: time="2024-12-13T01:59:28.424477070Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:59:28.433918 env[1308]: time="2024-12-13T01:59:28.433847075Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:59:28.436953 env[1308]: time="2024-12-13T01:59:28.436879195Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:59:28.440860 env[1308]: time="2024-12-13T01:59:28.440831753Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:59:28.441103 env[1308]: time="2024-12-13T01:59:28.441080591Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" 
returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Dec 13 01:59:28.442533 env[1308]: time="2024-12-13T01:59:28.442507368Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Dec 13 01:59:28.443566 env[1308]: time="2024-12-13T01:59:28.443522012Z" level=info msg="CreateContainer within sandbox \"d76f24044d3ac055e86eaf60d6ca255137c59e5f97cea51f6ae25b42a366acf3\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 01:59:28.466328 env[1308]: time="2024-12-13T01:59:28.466279742Z" level=info msg="CreateContainer within sandbox \"d76f24044d3ac055e86eaf60d6ca255137c59e5f97cea51f6ae25b42a366acf3\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"4e41e262c14baa3d5905f025821784f4fe99bc2f7f3df1b9ed4f890204dc8b54\"" Dec 13 01:59:28.467127 env[1308]: time="2024-12-13T01:59:28.467071307Z" level=info msg="StartContainer for \"4e41e262c14baa3d5905f025821784f4fe99bc2f7f3df1b9ed4f890204dc8b54\"" Dec 13 01:59:28.507079 systemd-networkd[1081]: cali16bf58cb103: Gained IPv6LL Dec 13 01:59:28.531286 env[1308]: time="2024-12-13T01:59:28.531222561Z" level=info msg="StartContainer for \"4e41e262c14baa3d5905f025821784f4fe99bc2f7f3df1b9ed4f890204dc8b54\" returns successfully" Dec 13 01:59:28.568899 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 01:59:28.569003 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali05712ea44be: link becomes ready Dec 13 01:59:28.567991 systemd-networkd[1081]: cali05712ea44be: Link UP Dec 13 01:59:28.569431 systemd-networkd[1081]: cali05712ea44be: Gained carrier Dec 13 01:59:28.582191 env[1308]: 2024-12-13 01:59:28.479 [INFO][4187] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--76c6f6c975--mmn2h-eth0 calico-apiserver-76c6f6c975- calico-apiserver 55f83b0b-5364-4958-b303-1b06d5dd6c20 903 0 2024-12-13 01:59:00 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:76c6f6c975 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-76c6f6c975-mmn2h eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali05712ea44be [] []}} ContainerID="2566c02b8a612864d3a26278d529654f7c4da3b2e0b855a99e8b790b608d2cd7" Namespace="calico-apiserver" Pod="calico-apiserver-76c6f6c975-mmn2h" WorkloadEndpoint="localhost-k8s-calico--apiserver--76c6f6c975--mmn2h-" Dec 13 01:59:28.582191 env[1308]: 2024-12-13 01:59:28.479 [INFO][4187] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2566c02b8a612864d3a26278d529654f7c4da3b2e0b855a99e8b790b608d2cd7" Namespace="calico-apiserver" Pod="calico-apiserver-76c6f6c975-mmn2h" WorkloadEndpoint="localhost-k8s-calico--apiserver--76c6f6c975--mmn2h-eth0" Dec 13 01:59:28.582191 env[1308]: 2024-12-13 01:59:28.518 [INFO][4220] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2566c02b8a612864d3a26278d529654f7c4da3b2e0b855a99e8b790b608d2cd7" HandleID="k8s-pod-network.2566c02b8a612864d3a26278d529654f7c4da3b2e0b855a99e8b790b608d2cd7" Workload="localhost-k8s-calico--apiserver--76c6f6c975--mmn2h-eth0" Dec 13 01:59:28.582191 env[1308]: 2024-12-13 01:59:28.533 [INFO][4220] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2566c02b8a612864d3a26278d529654f7c4da3b2e0b855a99e8b790b608d2cd7" 
HandleID="k8s-pod-network.2566c02b8a612864d3a26278d529654f7c4da3b2e0b855a99e8b790b608d2cd7" Workload="localhost-k8s-calico--apiserver--76c6f6c975--mmn2h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000376e90), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-76c6f6c975-mmn2h", "timestamp":"2024-12-13 01:59:28.518866882 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:59:28.582191 env[1308]: 2024-12-13 01:59:28.533 [INFO][4220] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:59:28.582191 env[1308]: 2024-12-13 01:59:28.533 [INFO][4220] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:59:28.582191 env[1308]: 2024-12-13 01:59:28.533 [INFO][4220] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:59:28.582191 env[1308]: 2024-12-13 01:59:28.536 [INFO][4220] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2566c02b8a612864d3a26278d529654f7c4da3b2e0b855a99e8b790b608d2cd7" host="localhost" Dec 13 01:59:28.582191 env[1308]: 2024-12-13 01:59:28.540 [INFO][4220] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:59:28.582191 env[1308]: 2024-12-13 01:59:28.547 [INFO][4220] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:59:28.582191 env[1308]: 2024-12-13 01:59:28.548 [INFO][4220] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:59:28.582191 env[1308]: 2024-12-13 01:59:28.550 [INFO][4220] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:59:28.582191 env[1308]: 2024-12-13 01:59:28.550 [INFO][4220] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2566c02b8a612864d3a26278d529654f7c4da3b2e0b855a99e8b790b608d2cd7" host="localhost" Dec 13 01:59:28.582191 env[1308]: 2024-12-13 01:59:28.552 [INFO][4220] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2566c02b8a612864d3a26278d529654f7c4da3b2e0b855a99e8b790b608d2cd7 Dec 13 01:59:28.582191 env[1308]: 2024-12-13 01:59:28.556 [INFO][4220] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2566c02b8a612864d3a26278d529654f7c4da3b2e0b855a99e8b790b608d2cd7" host="localhost" Dec 13 01:59:28.582191 env[1308]: 2024-12-13 01:59:28.561 [INFO][4220] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.2566c02b8a612864d3a26278d529654f7c4da3b2e0b855a99e8b790b608d2cd7" host="localhost" Dec 13 01:59:28.582191 env[1308]: 2024-12-13 01:59:28.561 [INFO][4220] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.2566c02b8a612864d3a26278d529654f7c4da3b2e0b855a99e8b790b608d2cd7" host="localhost" Dec 13 01:59:28.582191 env[1308]: 2024-12-13 01:59:28.561 [INFO][4220] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:59:28.582191 env[1308]: 2024-12-13 01:59:28.561 [INFO][4220] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="2566c02b8a612864d3a26278d529654f7c4da3b2e0b855a99e8b790b608d2cd7" HandleID="k8s-pod-network.2566c02b8a612864d3a26278d529654f7c4da3b2e0b855a99e8b790b608d2cd7" Workload="localhost-k8s-calico--apiserver--76c6f6c975--mmn2h-eth0" Dec 13 01:59:28.583058 env[1308]: 2024-12-13 01:59:28.564 [INFO][4187] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2566c02b8a612864d3a26278d529654f7c4da3b2e0b855a99e8b790b608d2cd7" Namespace="calico-apiserver" Pod="calico-apiserver-76c6f6c975-mmn2h" WorkloadEndpoint="localhost-k8s-calico--apiserver--76c6f6c975--mmn2h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76c6f6c975--mmn2h-eth0", GenerateName:"calico-apiserver-76c6f6c975-", Namespace:"calico-apiserver", SelfLink:"", UID:"55f83b0b-5364-4958-b303-1b06d5dd6c20", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 59, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76c6f6c975", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-76c6f6c975-mmn2h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali05712ea44be", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:59:28.583058 env[1308]: 2024-12-13 01:59:28.564 [INFO][4187] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="2566c02b8a612864d3a26278d529654f7c4da3b2e0b855a99e8b790b608d2cd7" Namespace="calico-apiserver" Pod="calico-apiserver-76c6f6c975-mmn2h" WorkloadEndpoint="localhost-k8s-calico--apiserver--76c6f6c975--mmn2h-eth0" Dec 13 01:59:28.583058 env[1308]: 2024-12-13 01:59:28.564 [INFO][4187] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali05712ea44be ContainerID="2566c02b8a612864d3a26278d529654f7c4da3b2e0b855a99e8b790b608d2cd7" Namespace="calico-apiserver" Pod="calico-apiserver-76c6f6c975-mmn2h" WorkloadEndpoint="localhost-k8s-calico--apiserver--76c6f6c975--mmn2h-eth0" Dec 13 01:59:28.583058 env[1308]: 2024-12-13 01:59:28.568 [INFO][4187] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2566c02b8a612864d3a26278d529654f7c4da3b2e0b855a99e8b790b608d2cd7" Namespace="calico-apiserver" Pod="calico-apiserver-76c6f6c975-mmn2h" WorkloadEndpoint="localhost-k8s-calico--apiserver--76c6f6c975--mmn2h-eth0" Dec 13 01:59:28.583058 env[1308]: 2024-12-13 01:59:28.570 [INFO][4187] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2566c02b8a612864d3a26278d529654f7c4da3b2e0b855a99e8b790b608d2cd7" Namespace="calico-apiserver" 
Pod="calico-apiserver-76c6f6c975-mmn2h" WorkloadEndpoint="localhost-k8s-calico--apiserver--76c6f6c975--mmn2h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76c6f6c975--mmn2h-eth0", GenerateName:"calico-apiserver-76c6f6c975-", Namespace:"calico-apiserver", SelfLink:"", UID:"55f83b0b-5364-4958-b303-1b06d5dd6c20", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 59, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76c6f6c975", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2566c02b8a612864d3a26278d529654f7c4da3b2e0b855a99e8b790b608d2cd7", Pod:"calico-apiserver-76c6f6c975-mmn2h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali05712ea44be", MAC:"36:95:96:24:24:1e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:59:28.583058 env[1308]: 2024-12-13 01:59:28.580 [INFO][4187] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2566c02b8a612864d3a26278d529654f7c4da3b2e0b855a99e8b790b608d2cd7" Namespace="calico-apiserver" Pod="calico-apiserver-76c6f6c975-mmn2h" WorkloadEndpoint="localhost-k8s-calico--apiserver--76c6f6c975--mmn2h-eth0" Dec 13 01:59:28.591000 audit[4258]: NETFILTER_CFG table=filter:107 family=2 entries=46 op=nft_register_chain pid=4258 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 01:59:28.591000 audit[4258]: SYSCALL arch=c000003e syscall=46 success=yes exit=23892 a0=3 a1=7ffdb5031d30 a2=0 a3=7ffdb5031d1c items=0 ppid=3436 pid=4258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:28.591000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 01:59:28.595999 env[1308]: time="2024-12-13T01:59:28.595942971Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:59:28.596106 env[1308]: time="2024-12-13T01:59:28.595979861Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:59:28.596106 env[1308]: time="2024-12-13T01:59:28.595989879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:59:28.596203 env[1308]: time="2024-12-13T01:59:28.596162524Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2566c02b8a612864d3a26278d529654f7c4da3b2e0b855a99e8b790b608d2cd7 pid=4265 runtime=io.containerd.runc.v2 Dec 13 01:59:28.618503 systemd-resolved[1225]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:59:28.641497 env[1308]: time="2024-12-13T01:59:28.641459814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76c6f6c975-mmn2h,Uid:55f83b0b-5364-4958-b303-1b06d5dd6c20,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"2566c02b8a612864d3a26278d529654f7c4da3b2e0b855a99e8b790b608d2cd7\"" Dec 13 01:59:28.765783 env[1308]: time="2024-12-13T01:59:28.765705657Z" level=info msg="StopPodSandbox for \"979c13085af9ea43644edf2ed921c4edb455c52536dacfe82ed22e69a8d5ff28\"" Dec 13 01:59:28.889481 kubelet[2213]: E1213 01:59:28.889448 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:59:29.189000 audit[4331]: NETFILTER_CFG table=filter:108 family=2 entries=13 op=nft_register_rule pid=4331 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 01:59:29.191375 kernel: kauditd_printk_skb: 22 callbacks suppressed Dec 13 01:59:29.191447 kernel: audit: type=1325 audit(1734055169.189:424): table=filter:108 family=2 entries=13 op=nft_register_rule pid=4331 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 01:59:29.201696 kernel: audit: type=1300 audit(1734055169.189:424): arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffe65834980 a2=0 a3=7ffe6583496c items=0 ppid=2382 pid=4331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:29.201861 kernel: audit: type=1327 audit(1734055169.189:424): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:59:29.189000 audit[4331]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffe65834980 a2=0 a3=7ffe6583496c items=0 ppid=2382 pid=4331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:29.189000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:59:29.203000 audit[4331]: NETFILTER_CFG table=nat:109 family=2 entries=35 op=nft_register_chain pid=4331 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 01:59:29.203000 audit[4331]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffe65834980 a2=0 a3=7ffe6583496c items=0 ppid=2382 pid=4331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:29.211523 kernel: audit: type=1325 audit(1734055169.203:425): table=nat:109 family=2 entries=35 op=nft_register_chain pid=4331 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 01:59:29.211586 kernel: 
audit: type=1300 audit(1734055169.203:425): arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffe65834980 a2=0 a3=7ffe6583496c items=0 ppid=2382 pid=4331 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:29.203000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:59:29.214031 kernel: audit: type=1327 audit(1734055169.203:425): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:59:29.217221 env[1308]: 2024-12-13 01:59:29.167 [INFO][4316] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="979c13085af9ea43644edf2ed921c4edb455c52536dacfe82ed22e69a8d5ff28" Dec 13 01:59:29.217221 env[1308]: 2024-12-13 01:59:29.168 [INFO][4316] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="979c13085af9ea43644edf2ed921c4edb455c52536dacfe82ed22e69a8d5ff28" iface="eth0" netns="/var/run/netns/cni-cb15205c-419b-0cac-2bb0-b53d0d749770" Dec 13 01:59:29.217221 env[1308]: 2024-12-13 01:59:29.168 [INFO][4316] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="979c13085af9ea43644edf2ed921c4edb455c52536dacfe82ed22e69a8d5ff28" iface="eth0" netns="/var/run/netns/cni-cb15205c-419b-0cac-2bb0-b53d0d749770" Dec 13 01:59:29.217221 env[1308]: 2024-12-13 01:59:29.168 [INFO][4316] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="979c13085af9ea43644edf2ed921c4edb455c52536dacfe82ed22e69a8d5ff28" iface="eth0" netns="/var/run/netns/cni-cb15205c-419b-0cac-2bb0-b53d0d749770" Dec 13 01:59:29.217221 env[1308]: 2024-12-13 01:59:29.168 [INFO][4316] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="979c13085af9ea43644edf2ed921c4edb455c52536dacfe82ed22e69a8d5ff28" Dec 13 01:59:29.217221 env[1308]: 2024-12-13 01:59:29.168 [INFO][4316] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="979c13085af9ea43644edf2ed921c4edb455c52536dacfe82ed22e69a8d5ff28" Dec 13 01:59:29.217221 env[1308]: 2024-12-13 01:59:29.205 [INFO][4323] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="979c13085af9ea43644edf2ed921c4edb455c52536dacfe82ed22e69a8d5ff28" HandleID="k8s-pod-network.979c13085af9ea43644edf2ed921c4edb455c52536dacfe82ed22e69a8d5ff28" Workload="localhost-k8s-coredns--76f75df574--jqxcg-eth0" Dec 13 01:59:29.217221 env[1308]: 2024-12-13 01:59:29.206 [INFO][4323] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:59:29.217221 env[1308]: 2024-12-13 01:59:29.206 [INFO][4323] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:59:29.217221 env[1308]: 2024-12-13 01:59:29.211 [WARNING][4323] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="979c13085af9ea43644edf2ed921c4edb455c52536dacfe82ed22e69a8d5ff28" HandleID="k8s-pod-network.979c13085af9ea43644edf2ed921c4edb455c52536dacfe82ed22e69a8d5ff28" Workload="localhost-k8s-coredns--76f75df574--jqxcg-eth0" Dec 13 01:59:29.217221 env[1308]: 2024-12-13 01:59:29.211 [INFO][4323] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="979c13085af9ea43644edf2ed921c4edb455c52536dacfe82ed22e69a8d5ff28" HandleID="k8s-pod-network.979c13085af9ea43644edf2ed921c4edb455c52536dacfe82ed22e69a8d5ff28" Workload="localhost-k8s-coredns--76f75df574--jqxcg-eth0" Dec 13 01:59:29.217221 env[1308]: 2024-12-13 01:59:29.213 [INFO][4323] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:59:29.217221 env[1308]: 2024-12-13 01:59:29.215 [INFO][4316] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="979c13085af9ea43644edf2ed921c4edb455c52536dacfe82ed22e69a8d5ff28" Dec 13 01:59:29.217645 env[1308]: time="2024-12-13T01:59:29.217454438Z" level=info msg="TearDown network for sandbox \"979c13085af9ea43644edf2ed921c4edb455c52536dacfe82ed22e69a8d5ff28\" successfully" Dec 13 01:59:29.217645 env[1308]: time="2024-12-13T01:59:29.217485686Z" level=info msg="StopPodSandbox for \"979c13085af9ea43644edf2ed921c4edb455c52536dacfe82ed22e69a8d5ff28\" returns successfully" Dec 13 01:59:29.217784 kubelet[2213]: E1213 01:59:29.217749 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:59:29.218796 env[1308]: time="2024-12-13T01:59:29.218430600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-jqxcg,Uid:0356ebac-7712-4e16-9963-c87ca7672297,Namespace:kube-system,Attempt:1,}" Dec 13 01:59:29.219957 systemd[1]: run-netns-cni\x2dcb15205c\x2d419b\x2d0cac\x2d2bb0\x2db53d0d749770.mount: Deactivated successfully. 
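The run-netns-cni\x2d… token in the systemd message above is the mount unit derived from the network-namespace path that was just torn down: systemd turns path separators into dashes and escapes literal dashes as \x2d. A small sketch that reverses only the escapes visible here, not the full systemd escaping rules:

    package main

    import (
        "fmt"
        "strings"
    )

    // unitToPath reverses the two transformations visible in the log:
    // "-" separators become "/" and the "\x2d" escape becomes a literal "-".
    func unitToPath(unit string) string {
        name := strings.TrimSuffix(unit, ".mount")
        path := strings.ReplaceAll(name, "-", "/")
        return "/" + strings.ReplaceAll(path, `\x2d`, "-")
    }

    func main() {
        unit := `run-netns-cni\x2dcb15205c\x2d419b\x2d0cac\x2d2bb0\x2db53d0d749770.mount`
        // Prints /run/netns/cni-cb15205c-419b-0cac-2bb0-b53d0d749770, the same
        // namespace the CNI teardown entries reference (via the /var/run symlink).
        fmt.Println(unitToPath(unit))
    }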
Dec 13 01:59:29.330116 systemd-networkd[1081]: cali2783d35463a: Link UP Dec 13 01:59:29.332473 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali2783d35463a: link becomes ready Dec 13 01:59:29.331840 systemd-networkd[1081]: cali2783d35463a: Gained carrier Dec 13 01:59:29.343558 env[1308]: 2024-12-13 01:59:29.264 [INFO][4334] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--jqxcg-eth0 coredns-76f75df574- kube-system 0356ebac-7712-4e16-9963-c87ca7672297 918 0 2024-12-13 01:58:53 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-jqxcg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2783d35463a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="01f838c17445c3f330ad174bea58a82872c28b457322d1fa0922347238f46480" Namespace="kube-system" Pod="coredns-76f75df574-jqxcg" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--jqxcg-" Dec 13 01:59:29.343558 env[1308]: 2024-12-13 01:59:29.264 [INFO][4334] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="01f838c17445c3f330ad174bea58a82872c28b457322d1fa0922347238f46480" Namespace="kube-system" Pod="coredns-76f75df574-jqxcg" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--jqxcg-eth0" Dec 13 01:59:29.343558 env[1308]: 2024-12-13 01:59:29.291 [INFO][4347] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="01f838c17445c3f330ad174bea58a82872c28b457322d1fa0922347238f46480" HandleID="k8s-pod-network.01f838c17445c3f330ad174bea58a82872c28b457322d1fa0922347238f46480" Workload="localhost-k8s-coredns--76f75df574--jqxcg-eth0" Dec 13 01:59:29.343558 env[1308]: 2024-12-13 01:59:29.301 [INFO][4347] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="01f838c17445c3f330ad174bea58a82872c28b457322d1fa0922347238f46480" HandleID="k8s-pod-network.01f838c17445c3f330ad174bea58a82872c28b457322d1fa0922347238f46480" Workload="localhost-k8s-coredns--76f75df574--jqxcg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00043caf0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-jqxcg", "timestamp":"2024-12-13 01:59:29.291074095 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:59:29.343558 env[1308]: 2024-12-13 01:59:29.301 [INFO][4347] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:59:29.343558 env[1308]: 2024-12-13 01:59:29.301 [INFO][4347] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
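systemd-networkd's "Gained IPv6LL" and "Gained carrier" messages for the cali* veths mean a kernel-assigned fe80::/64 link-local address appeared once the link came up. Under the default EUI-64 address-generation mode that address is derived from the interface MAC; the sketch below works through the derivation for the MAC 32:70:e0:80:1d:c3 recorded in the WorkloadEndpoint dump earlier (which end of the veth pair actually carries that MAC, and whether EUI-64 mode is in effect, is an assumption, not something these lines establish):

    package main

    import (
        "fmt"
        "net"
    )

    // eui64LinkLocal derives the fe80::/64 address the kernel would auto-configure
    // from a 48-bit MAC under EUI-64 generation: flip the universal/local bit of
    // the first octet and splice 0xff,0xfe into the middle.
    func eui64LinkLocal(mac net.HardwareAddr) net.IP {
        ip := make(net.IP, net.IPv6len)
        ip[0], ip[1] = 0xfe, 0x80
        ip[8] = mac[0] ^ 0x02
        ip[9], ip[10], ip[11] = mac[1], mac[2], 0xff
        ip[12], ip[13] = 0xfe, mac[3]
        ip[14], ip[15] = mac[4], mac[5]
        return ip
    }

    func main() {
        // MAC taken from the endpoint dump above; hypothetical choice of interface.
        mac, _ := net.ParseMAC("32:70:e0:80:1d:c3")
        fmt.Println(eui64LinkLocal(mac)) // fe80::3070:e0ff:fe80:1dc3
    }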
Dec 13 01:59:29.343558 env[1308]: 2024-12-13 01:59:29.301 [INFO][4347] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 01:59:29.343558 env[1308]: 2024-12-13 01:59:29.303 [INFO][4347] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.01f838c17445c3f330ad174bea58a82872c28b457322d1fa0922347238f46480" host="localhost" Dec 13 01:59:29.343558 env[1308]: 2024-12-13 01:59:29.307 [INFO][4347] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 01:59:29.343558 env[1308]: 2024-12-13 01:59:29.311 [INFO][4347] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 01:59:29.343558 env[1308]: 2024-12-13 01:59:29.313 [INFO][4347] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 01:59:29.343558 env[1308]: 2024-12-13 01:59:29.315 [INFO][4347] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 01:59:29.343558 env[1308]: 2024-12-13 01:59:29.315 [INFO][4347] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.01f838c17445c3f330ad174bea58a82872c28b457322d1fa0922347238f46480" host="localhost" Dec 13 01:59:29.343558 env[1308]: 2024-12-13 01:59:29.317 [INFO][4347] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.01f838c17445c3f330ad174bea58a82872c28b457322d1fa0922347238f46480 Dec 13 01:59:29.343558 env[1308]: 2024-12-13 01:59:29.320 [INFO][4347] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.01f838c17445c3f330ad174bea58a82872c28b457322d1fa0922347238f46480" host="localhost" Dec 13 01:59:29.343558 env[1308]: 2024-12-13 01:59:29.325 [INFO][4347] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.01f838c17445c3f330ad174bea58a82872c28b457322d1fa0922347238f46480" host="localhost" Dec 13 01:59:29.343558 env[1308]: 2024-12-13 01:59:29.325 [INFO][4347] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.01f838c17445c3f330ad174bea58a82872c28b457322d1fa0922347238f46480" host="localhost" Dec 13 01:59:29.343558 env[1308]: 2024-12-13 01:59:29.325 [INFO][4347] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
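The WorkloadEndpoint dumps in this section print container ports in Go hex notation (Port:0x35, Port:0x23c1). Decoded, they are the usual CoreDNS ports; the name/protocol pairing below is copied from the structs in the log:

    package main

    import "fmt"

    func main() {
        // Name/port pairs as they appear in the WorkloadEndpointPort structs above.
        ports := []struct {
            name string
            port uint16
        }{
            {"dns (UDP)", 0x35},       // 53
            {"dns-tcp (TCP)", 0x35},   // 53
            {"metrics (TCP)", 0x23c1}, // 9153, the CoreDNS Prometheus metrics port
        }
        for _, p := range ports {
            fmt.Printf("%-14s %d\n", p.name, p.port)
        }
    }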
Dec 13 01:59:29.343558 env[1308]: 2024-12-13 01:59:29.325 [INFO][4347] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="01f838c17445c3f330ad174bea58a82872c28b457322d1fa0922347238f46480" HandleID="k8s-pod-network.01f838c17445c3f330ad174bea58a82872c28b457322d1fa0922347238f46480" Workload="localhost-k8s-coredns--76f75df574--jqxcg-eth0" Dec 13 01:59:29.344354 env[1308]: 2024-12-13 01:59:29.327 [INFO][4334] cni-plugin/k8s.go 386: Populated endpoint ContainerID="01f838c17445c3f330ad174bea58a82872c28b457322d1fa0922347238f46480" Namespace="kube-system" Pod="coredns-76f75df574-jqxcg" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--jqxcg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--jqxcg-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"0356ebac-7712-4e16-9963-c87ca7672297", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 58, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-jqxcg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2783d35463a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:59:29.344354 env[1308]: 2024-12-13 01:59:29.328 [INFO][4334] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="01f838c17445c3f330ad174bea58a82872c28b457322d1fa0922347238f46480" Namespace="kube-system" Pod="coredns-76f75df574-jqxcg" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--jqxcg-eth0" Dec 13 01:59:29.344354 env[1308]: 2024-12-13 01:59:29.328 [INFO][4334] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2783d35463a ContainerID="01f838c17445c3f330ad174bea58a82872c28b457322d1fa0922347238f46480" Namespace="kube-system" Pod="coredns-76f75df574-jqxcg" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--jqxcg-eth0" Dec 13 01:59:29.344354 env[1308]: 2024-12-13 01:59:29.331 [INFO][4334] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="01f838c17445c3f330ad174bea58a82872c28b457322d1fa0922347238f46480" Namespace="kube-system" Pod="coredns-76f75df574-jqxcg" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--jqxcg-eth0" Dec 13 01:59:29.344354 env[1308]: 2024-12-13 01:59:29.331 [INFO][4334] cni-plugin/k8s.go 414: Added 
Mac, interface name, and active container ID to endpoint ContainerID="01f838c17445c3f330ad174bea58a82872c28b457322d1fa0922347238f46480" Namespace="kube-system" Pod="coredns-76f75df574-jqxcg" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--jqxcg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--jqxcg-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"0356ebac-7712-4e16-9963-c87ca7672297", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 58, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"01f838c17445c3f330ad174bea58a82872c28b457322d1fa0922347238f46480", Pod:"coredns-76f75df574-jqxcg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2783d35463a", MAC:"3e:8e:bb:22:48:7e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:59:29.344354 env[1308]: 2024-12-13 01:59:29.341 [INFO][4334] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="01f838c17445c3f330ad174bea58a82872c28b457322d1fa0922347238f46480" Namespace="kube-system" Pod="coredns-76f75df574-jqxcg" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--jqxcg-eth0" Dec 13 01:59:29.352000 audit[4370]: NETFILTER_CFG table=filter:110 family=2 entries=52 op=nft_register_chain pid=4370 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 01:59:29.352000 audit[4370]: SYSCALL arch=c000003e syscall=46 success=yes exit=24636 a0=3 a1=7fffb94f2c00 a2=0 a3=7fffb94f2bec items=0 ppid=3436 pid=4370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:29.358300 env[1308]: time="2024-12-13T01:59:29.357620817Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:59:29.358300 env[1308]: time="2024-12-13T01:59:29.357672765Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:59:29.358300 env[1308]: time="2024-12-13T01:59:29.357683555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:59:29.358300 env[1308]: time="2024-12-13T01:59:29.357876477Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/01f838c17445c3f330ad174bea58a82872c28b457322d1fa0922347238f46480 pid=4377 runtime=io.containerd.runc.v2 Dec 13 01:59:29.361865 kernel: audit: type=1325 audit(1734055169.352:426): table=filter:110 family=2 entries=52 op=nft_register_chain pid=4370 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 01:59:29.361915 kernel: audit: type=1300 audit(1734055169.352:426): arch=c000003e syscall=46 success=yes exit=24636 a0=3 a1=7fffb94f2c00 a2=0 a3=7fffb94f2bec items=0 ppid=3436 pid=4370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:29.365106 kernel: audit: type=1327 audit(1734055169.352:426): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 01:59:29.352000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 01:59:29.390097 systemd-resolved[1225]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 01:59:29.426136 env[1308]: time="2024-12-13T01:59:29.425526751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-jqxcg,Uid:0356ebac-7712-4e16-9963-c87ca7672297,Namespace:kube-system,Attempt:1,} returns sandbox id \"01f838c17445c3f330ad174bea58a82872c28b457322d1fa0922347238f46480\"" Dec 13 01:59:29.426351 kubelet[2213]: E1213 01:59:29.426249 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:59:29.428044 env[1308]: time="2024-12-13T01:59:29.428000503Z" level=info msg="CreateContainer within sandbox \"01f838c17445c3f330ad174bea58a82872c28b457322d1fa0922347238f46480\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:59:29.472555 env[1308]: time="2024-12-13T01:59:29.472430953Z" level=info msg="CreateContainer within sandbox \"01f838c17445c3f330ad174bea58a82872c28b457322d1fa0922347238f46480\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"210af704d15a8802cd2acd76c29984c9ac39cf12b36d090a929664438c567f62\"" Dec 13 01:59:29.473106 env[1308]: time="2024-12-13T01:59:29.473077405Z" level=info msg="StartContainer for \"210af704d15a8802cd2acd76c29984c9ac39cf12b36d090a929664438c567f62\"" Dec 13 01:59:29.524683 env[1308]: time="2024-12-13T01:59:29.524610905Z" level=info msg="StartContainer for \"210af704d15a8802cd2acd76c29984c9ac39cf12b36d090a929664438c567f62\" returns successfully" Dec 13 01:59:29.738892 systemd[1]: Started sshd@10-10.0.0.48:22-10.0.0.1:34192.service. Dec 13 01:59:29.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.48:22-10.0.0.1:34192 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:59:29.743795 kernel: audit: type=1130 audit(1734055169.738:427): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.48:22-10.0.0.1:34192 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:29.769000 audit[4446]: USER_ACCT pid=4446 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:29.770926 sshd[4446]: Accepted publickey for core from 10.0.0.1 port 34192 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 01:59:29.770000 audit[4446]: CRED_ACQ pid=4446 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:29.771000 audit[4446]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc85422e20 a2=3 a3=0 items=0 ppid=1 pid=4446 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:29.771000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 01:59:29.772214 sshd[4446]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:59:29.776028 systemd-logind[1291]: New session 11 of user core. Dec 13 01:59:29.777007 systemd[1]: Started session-11.scope. Dec 13 01:59:29.780000 audit[4446]: USER_START pid=4446 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:29.782000 audit[4449]: CRED_ACQ pid=4449 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:29.896103 kubelet[2213]: E1213 01:59:29.894757 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:59:29.896103 kubelet[2213]: E1213 01:59:29.895336 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:59:29.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.48:22-10.0.0.1:34202 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:29.903977 systemd[1]: Started sshd@11-10.0.0.48:22-10.0.0.1:34202.service. 
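The recurring dns.go:153 errors come from kubelet's cap on resolv.conf nameservers: the node's resolver list holds more entries than kubelet will pass through to a pod, so it keeps the first three (1.1.1.1, 1.0.0.1, 8.8.8.8 here) and reports the rest as omitted. A minimal sketch of that trimming behaviour, assuming the usual limit of three nameservers and a hypothetical fourth entry (the node's actual resolv.conf is not shown in this log):

    package main

    import "fmt"

    // trimNameservers mimics kubelet's behaviour when a node resolv.conf lists
    // more nameservers than it will hand to a pod: keep the first `limit`
    // entries and report the rest as omitted.
    func trimNameservers(servers []string, limit int) (kept, omitted []string) {
        if len(servers) <= limit {
            return servers, nil
        }
        return servers[:limit], servers[limit:]
    }

    func main() {
        // Hypothetical node resolver list; only the first three appear in the
        // "applied nameserver line" of the log entries above.
        servers := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "192.168.1.1"}
        kept, omitted := trimNameservers(servers, 3)
        fmt.Println("applied:", kept)
        fmt.Println("omitted:", omitted)
    }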
Dec 13 01:59:29.907518 sshd[4446]: pam_unix(sshd:session): session closed for user core Dec 13 01:59:29.910000 audit[4446]: USER_END pid=4446 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:29.911000 audit[4446]: CRED_DISP pid=4446 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:29.915138 systemd[1]: sshd@10-10.0.0.48:22-10.0.0.1:34192.service: Deactivated successfully. Dec 13 01:59:29.914000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.48:22-10.0.0.1:34192 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:29.916616 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 01:59:29.917153 systemd-logind[1291]: Session 11 logged out. Waiting for processes to exit. Dec 13 01:59:29.918904 systemd-logind[1291]: Removed session 11. Dec 13 01:59:29.924000 audit[4463]: NETFILTER_CFG table=filter:111 family=2 entries=10 op=nft_register_rule pid=4463 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 01:59:29.924000 audit[4463]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffc873ee540 a2=0 a3=7ffc873ee52c items=0 ppid=2382 pid=4463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:29.924000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:59:29.926723 kubelet[2213]: I1213 01:59:29.926439 2213 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-jqxcg" podStartSLOduration=36.92639425 podStartE2EDuration="36.92639425s" podCreationTimestamp="2024-12-13 01:58:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:59:29.90847887 +0000 UTC m=+50.276827137" watchObservedRunningTime="2024-12-13 01:59:29.92639425 +0000 UTC m=+50.294742477" Dec 13 01:59:29.928000 audit[4463]: NETFILTER_CFG table=nat:112 family=2 entries=44 op=nft_register_rule pid=4463 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 01:59:29.928000 audit[4463]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffc873ee540 a2=0 a3=7ffc873ee52c items=0 ppid=2382 pid=4463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:29.928000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:59:29.950000 audit[4459]: USER_ACCT pid=4459 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:29.951702 sshd[4459]: 
Accepted publickey for core from 10.0.0.1 port 34202 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 01:59:29.952000 audit[4459]: CRED_ACQ pid=4459 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:29.952000 audit[4459]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffec5937bf0 a2=3 a3=0 items=0 ppid=1 pid=4459 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:29.952000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 01:59:29.953630 sshd[4459]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:59:29.956992 systemd-logind[1291]: New session 12 of user core. Dec 13 01:59:29.957717 systemd[1]: Started session-12.scope. Dec 13 01:59:29.961000 audit[4459]: USER_START pid=4459 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:29.963000 audit[4467]: CRED_ACQ pid=4467 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:30.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.48:22-10.0.0.1:34214 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:30.142097 systemd[1]: Started sshd@12-10.0.0.48:22-10.0.0.1:34214.service. Dec 13 01:59:30.258280 sshd[4459]: pam_unix(sshd:session): session closed for user core Dec 13 01:59:30.260000 audit[4459]: USER_END pid=4459 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:30.260000 audit[4459]: CRED_DISP pid=4459 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:30.268896 systemd[1]: sshd@11-10.0.0.48:22-10.0.0.1:34202.service: Deactivated successfully. Dec 13 01:59:30.268000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.48:22-10.0.0.1:34202 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:30.271426 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 01:59:30.272014 systemd-logind[1291]: Session 12 logged out. Waiting for processes to exit. Dec 13 01:59:30.274476 systemd-logind[1291]: Removed session 12. 
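Aside: the PROCTITLE values in the audit records above are hex-encoded command lines whose arguments are separated by NUL bytes, and comm is truncated to 15 characters by the kernel, which is why it reads "iptables-restor". A minimal decoding sketch; decode_proctitle is a hypothetical helper written here for illustration, not part of auditd or any tool in this log:

```python
# Decode an audit PROCTITLE value: hex-encoded argv with NUL-separated arguments.
def decode_proctitle(hex_value: str) -> str:
    return bytes.fromhex(hex_value).replace(b"\x00", b" ").decode("utf-8", errors="replace")

# The two values that recur in the records above:
print(decode_proctitle(
    "69707461626C65732D726573746F7265002D770035002D5700313030303030"
    "002D2D6E6F666C757368002D2D636F756E74657273"
))  # -> iptables-restore -w 5 -W 100000 --noflush --counters

print(decode_proctitle("737368643A20636F7265205B707269765D"))  # -> sshd: core [priv]
```

So the NETFILTER_CFG entries correspond to batched iptables-restore runs with --noflush and --counters, and the sshd PROCTITLE entries correspond to the privileged sshd process handling the "core" sessions.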
Dec 13 01:59:30.312000 audit[4474]: USER_ACCT pid=4474 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:30.313625 sshd[4474]: Accepted publickey for core from 10.0.0.1 port 34214 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 01:59:30.314000 audit[4474]: CRED_ACQ pid=4474 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:30.314000 audit[4474]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffc8228040 a2=3 a3=0 items=0 ppid=1 pid=4474 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:30.314000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 01:59:30.315551 sshd[4474]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:59:30.320451 systemd-logind[1291]: New session 13 of user core. Dec 13 01:59:30.321461 systemd[1]: Started session-13.scope. Dec 13 01:59:30.326000 audit[4474]: USER_START pid=4474 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:30.327000 audit[4479]: CRED_ACQ pid=4479 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:30.435140 sshd[4474]: pam_unix(sshd:session): session closed for user core Dec 13 01:59:30.435000 audit[4474]: USER_END pid=4474 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:30.435000 audit[4474]: CRED_DISP pid=4474 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:30.436000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.48:22-10.0.0.1:34214 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:30.437128 systemd[1]: sshd@12-10.0.0.48:22-10.0.0.1:34214.service: Deactivated successfully. Dec 13 01:59:30.438115 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 01:59:30.438154 systemd-logind[1291]: Session 13 logged out. Waiting for processes to exit. Dec 13 01:59:30.438994 systemd-logind[1291]: Removed session 13. 
Dec 13 01:59:30.490904 systemd-networkd[1081]: cali05712ea44be: Gained IPv6LL Dec 13 01:59:30.902796 kubelet[2213]: E1213 01:59:30.901923 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:59:30.902796 kubelet[2213]: E1213 01:59:30.902715 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:59:30.966000 audit[4491]: NETFILTER_CFG table=filter:113 family=2 entries=10 op=nft_register_rule pid=4491 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 01:59:30.966000 audit[4491]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffc9b113670 a2=0 a3=7ffc9b11365c items=0 ppid=2382 pid=4491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:30.966000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:59:30.992000 audit[4491]: NETFILTER_CFG table=nat:114 family=2 entries=56 op=nft_register_chain pid=4491 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 01:59:30.992000 audit[4491]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffc9b113670 a2=0 a3=7ffc9b11365c items=0 ppid=2382 pid=4491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:30.992000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:59:31.070087 systemd-networkd[1081]: cali2783d35463a: Gained IPv6LL Dec 13 01:59:31.160099 env[1308]: time="2024-12-13T01:59:31.159960765Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:59:31.164664 env[1308]: time="2024-12-13T01:59:31.164601403Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:59:31.166596 env[1308]: time="2024-12-13T01:59:31.166566841Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:59:31.168553 env[1308]: time="2024-12-13T01:59:31.168500560Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:59:31.169080 env[1308]: time="2024-12-13T01:59:31.169049971Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Dec 13 01:59:31.171654 env[1308]: time="2024-12-13T01:59:31.171618791Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 01:59:31.181347 env[1308]: 
time="2024-12-13T01:59:31.175644444Z" level=info msg="CreateContainer within sandbox \"3f2b6191f84a72fdbafc6efe51af224edc248d09c2fcabeccbba94254e9cb061\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Dec 13 01:59:31.198924 env[1308]: time="2024-12-13T01:59:31.198862302Z" level=info msg="CreateContainer within sandbox \"3f2b6191f84a72fdbafc6efe51af224edc248d09c2fcabeccbba94254e9cb061\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"5ac9595f16d67534b7a103edf07cb0470b1066c120ce2c74041ad8d264ccbc01\"" Dec 13 01:59:31.199822 env[1308]: time="2024-12-13T01:59:31.199793049Z" level=info msg="StartContainer for \"5ac9595f16d67534b7a103edf07cb0470b1066c120ce2c74041ad8d264ccbc01\"" Dec 13 01:59:31.458067 env[1308]: time="2024-12-13T01:59:31.457854654Z" level=info msg="StartContainer for \"5ac9595f16d67534b7a103edf07cb0470b1066c120ce2c74041ad8d264ccbc01\" returns successfully" Dec 13 01:59:31.914597 kubelet[2213]: E1213 01:59:31.911568 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:59:31.951450 kubelet[2213]: I1213 01:59:31.949486 2213 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6ff7c669bd-mkmgc" podStartSLOduration=26.868143034 podStartE2EDuration="31.949235946s" podCreationTimestamp="2024-12-13 01:59:00 +0000 UTC" firstStartedPulling="2024-12-13 01:59:26.088894538 +0000 UTC m=+46.457242775" lastFinishedPulling="2024-12-13 01:59:31.16998745 +0000 UTC m=+51.538335687" observedRunningTime="2024-12-13 01:59:31.9489187 +0000 UTC m=+52.317266967" watchObservedRunningTime="2024-12-13 01:59:31.949235946 +0000 UTC m=+52.317584183" Dec 13 01:59:34.745301 env[1308]: time="2024-12-13T01:59:34.745234982Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:59:34.747963 env[1308]: time="2024-12-13T01:59:34.747924118Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:59:34.750451 env[1308]: time="2024-12-13T01:59:34.750396056Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:59:34.752638 env[1308]: time="2024-12-13T01:59:34.752580074Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:59:34.753130 env[1308]: time="2024-12-13T01:59:34.753093196Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 13 01:59:34.753907 env[1308]: time="2024-12-13T01:59:34.753805092Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 01:59:34.755163 env[1308]: time="2024-12-13T01:59:34.755126962Z" level=info msg="CreateContainer within sandbox \"af6c214d6a9c2e9ec35fcfd8ee70671f89a8925aef7c40d4165aaf878cce43c7\" for container 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 01:59:34.771487 env[1308]: time="2024-12-13T01:59:34.771419572Z" level=info msg="CreateContainer within sandbox \"af6c214d6a9c2e9ec35fcfd8ee70671f89a8925aef7c40d4165aaf878cce43c7\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"cae8134f34d20e689a9fd62facf733616c94539848d8fba345e8561cfe23b841\"" Dec 13 01:59:34.772104 env[1308]: time="2024-12-13T01:59:34.772069682Z" level=info msg="StartContainer for \"cae8134f34d20e689a9fd62facf733616c94539848d8fba345e8561cfe23b841\"" Dec 13 01:59:34.841782 env[1308]: time="2024-12-13T01:59:34.841711469Z" level=info msg="StartContainer for \"cae8134f34d20e689a9fd62facf733616c94539848d8fba345e8561cfe23b841\" returns successfully" Dec 13 01:59:34.932900 kubelet[2213]: I1213 01:59:34.932856 2213 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-76c6f6c975-rhnlz" podStartSLOduration=26.460447401 podStartE2EDuration="34.932810315s" podCreationTimestamp="2024-12-13 01:59:00 +0000 UTC" firstStartedPulling="2024-12-13 01:59:26.281097452 +0000 UTC m=+46.649445689" lastFinishedPulling="2024-12-13 01:59:34.753460366 +0000 UTC m=+55.121808603" observedRunningTime="2024-12-13 01:59:34.932438377 +0000 UTC m=+55.300786614" watchObservedRunningTime="2024-12-13 01:59:34.932810315 +0000 UTC m=+55.301158552" Dec 13 01:59:34.943000 audit[4596]: NETFILTER_CFG table=filter:115 family=2 entries=10 op=nft_register_rule pid=4596 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 01:59:34.945143 kernel: kauditd_printk_skb: 44 callbacks suppressed Dec 13 01:59:34.945209 kernel: audit: type=1325 audit(1734055174.943:458): table=filter:115 family=2 entries=10 op=nft_register_rule pid=4596 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 01:59:34.943000 audit[4596]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffdeea43740 a2=0 a3=7ffdeea4372c items=0 ppid=2382 pid=4596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:34.952603 kernel: audit: type=1300 audit(1734055174.943:458): arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffdeea43740 a2=0 a3=7ffdeea4372c items=0 ppid=2382 pid=4596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:34.943000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:59:34.955797 kernel: audit: type=1327 audit(1734055174.943:458): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:59:34.954000 audit[4596]: NETFILTER_CFG table=nat:116 family=2 entries=20 op=nft_register_rule pid=4596 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 01:59:34.954000 audit[4596]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffdeea43740 a2=0 a3=7ffdeea4372c items=0 ppid=2382 pid=4596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:34.963457 kernel: audit: type=1325 audit(1734055174.954:459): 
table=nat:116 family=2 entries=20 op=nft_register_rule pid=4596 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 01:59:34.963530 kernel: audit: type=1300 audit(1734055174.954:459): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffdeea43740 a2=0 a3=7ffdeea4372c items=0 ppid=2382 pid=4596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:34.963551 kernel: audit: type=1327 audit(1734055174.954:459): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:59:34.954000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:59:35.438498 systemd[1]: Started sshd@13-10.0.0.48:22-10.0.0.1:34230.service. Dec 13 01:59:35.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.48:22-10.0.0.1:34230 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:35.444801 kernel: audit: type=1130 audit(1734055175.437:460): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.48:22-10.0.0.1:34230 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:35.471000 audit[4597]: USER_ACCT pid=4597 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:35.477456 sshd[4597]: Accepted publickey for core from 10.0.0.1 port 34230 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 01:59:35.477802 kernel: audit: type=1101 audit(1734055175.471:461): pid=4597 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:35.476000 audit[4597]: CRED_ACQ pid=4597 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:35.478190 sshd[4597]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:59:35.486183 kernel: audit: type=1103 audit(1734055175.476:462): pid=4597 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:35.486321 kernel: audit: type=1006 audit(1734055175.476:463): pid=4597 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Dec 13 01:59:35.476000 audit[4597]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd623862a0 a2=3 a3=0 items=0 ppid=1 pid=4597 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:35.476000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 
01:59:35.485912 systemd[1]: Started session-14.scope. Dec 13 01:59:35.487778 systemd-logind[1291]: New session 14 of user core. Dec 13 01:59:35.493000 audit[4597]: USER_START pid=4597 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:35.495000 audit[4600]: CRED_ACQ pid=4600 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:35.612290 sshd[4597]: pam_unix(sshd:session): session closed for user core Dec 13 01:59:35.612000 audit[4597]: USER_END pid=4597 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:35.612000 audit[4597]: CRED_DISP pid=4597 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:35.614903 systemd[1]: sshd@13-10.0.0.48:22-10.0.0.1:34230.service: Deactivated successfully. Dec 13 01:59:35.615950 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 01:59:35.614000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.48:22-10.0.0.1:34230 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:35.618606 systemd-logind[1291]: Session 14 logged out. Waiting for processes to exit. Dec 13 01:59:35.619581 systemd-logind[1291]: Removed session 14. 
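Aside: the kubelet pod_startup_latency_tracker records above are internally consistent and can be cross-checked from their own fields: podStartE2EDuration equals the gap between podCreationTimestamp and the watch-observed running time, and podStartSLOduration equals that gap minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A short check against the calico-apiserver-76c6f6c975-rhnlz record; datetime below drops the logged nanoseconds, so the results agree only to microsecond precision:

```python
from datetime import datetime, timezone

# Figures copied from the calico-apiserver-76c6f6c975-rhnlz record above.
created = datetime(2024, 12, 13, 1, 59, 0, tzinfo=timezone.utc)           # podCreationTimestamp
running = datetime(2024, 12, 13, 1, 59, 34, 932810, tzinfo=timezone.utc)  # watchObservedRunningTime (ns truncated)
pull_window = 55.121808603 - 46.649445689  # lastFinishedPulling - firstStartedPulling (m=+ offsets)

e2e = (running - created).total_seconds()
print(e2e)                # ~34.93281   (podStartE2EDuration = 34.932810315s)
print(e2e - pull_window)  # ~26.460447  (podStartSLOduration = 26.460447401)
```

The coredns-76f75df574-jqxcg record earlier shows identical SLO and E2E durations because its pull timestamps are the zero value, i.e. no image pull was recorded for that pod.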
Dec 13 01:59:35.892000 audit[4612]: NETFILTER_CFG table=filter:117 family=2 entries=9 op=nft_register_rule pid=4612 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 01:59:35.892000 audit[4612]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffc89dac930 a2=0 a3=7ffc89dac91c items=0 ppid=2382 pid=4612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:35.892000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:59:35.896000 audit[4612]: NETFILTER_CFG table=nat:118 family=2 entries=27 op=nft_register_chain pid=4612 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 01:59:35.896000 audit[4612]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffc89dac930 a2=0 a3=7ffc89dac91c items=0 ppid=2382 pid=4612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:35.896000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:59:36.773624 env[1308]: time="2024-12-13T01:59:36.772751378Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:59:36.775960 env[1308]: time="2024-12-13T01:59:36.775297644Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:59:36.785081 env[1308]: time="2024-12-13T01:59:36.784678462Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:59:36.787368 env[1308]: time="2024-12-13T01:59:36.787293818Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:59:36.788110 env[1308]: time="2024-12-13T01:59:36.788044512Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Dec 13 01:59:36.789915 env[1308]: time="2024-12-13T01:59:36.789883605Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 01:59:36.795501 env[1308]: time="2024-12-13T01:59:36.795288766Z" level=info msg="CreateContainer within sandbox \"d76f24044d3ac055e86eaf60d6ca255137c59e5f97cea51f6ae25b42a366acf3\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 01:59:36.856374 env[1308]: time="2024-12-13T01:59:36.856279828Z" level=info msg="CreateContainer within sandbox \"d76f24044d3ac055e86eaf60d6ca255137c59e5f97cea51f6ae25b42a366acf3\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"23e6371d2bf792232148cc2a81bc69a26158fa6c7eff2e1f72e53802839c601c\"" Dec 13 01:59:36.858530 env[1308]: 
time="2024-12-13T01:59:36.857415226Z" level=info msg="StartContainer for \"23e6371d2bf792232148cc2a81bc69a26158fa6c7eff2e1f72e53802839c601c\"" Dec 13 01:59:36.964467 env[1308]: time="2024-12-13T01:59:36.964340370Z" level=info msg="StartContainer for \"23e6371d2bf792232148cc2a81bc69a26158fa6c7eff2e1f72e53802839c601c\" returns successfully" Dec 13 01:59:37.205204 env[1308]: time="2024-12-13T01:59:37.205120065Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:59:37.208603 env[1308]: time="2024-12-13T01:59:37.208541535Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:59:37.212208 env[1308]: time="2024-12-13T01:59:37.211317112Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:59:37.214518 env[1308]: time="2024-12-13T01:59:37.214459651Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 01:59:37.216573 env[1308]: time="2024-12-13T01:59:37.215705539Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 13 01:59:37.222658 env[1308]: time="2024-12-13T01:59:37.222595450Z" level=info msg="CreateContainer within sandbox \"2566c02b8a612864d3a26278d529654f7c4da3b2e0b855a99e8b790b608d2cd7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 01:59:37.277175 env[1308]: time="2024-12-13T01:59:37.277059807Z" level=info msg="CreateContainer within sandbox \"2566c02b8a612864d3a26278d529654f7c4da3b2e0b855a99e8b790b608d2cd7\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"fec7ca4b6872641a05e62292cc7f22fc254dca1c03fe3e11af4558a780e4b7bc\"" Dec 13 01:59:37.281227 env[1308]: time="2024-12-13T01:59:37.277988911Z" level=info msg="StartContainer for \"fec7ca4b6872641a05e62292cc7f22fc254dca1c03fe3e11af4558a780e4b7bc\"" Dec 13 01:59:37.395183 env[1308]: time="2024-12-13T01:59:37.394113801Z" level=info msg="StartContainer for \"fec7ca4b6872641a05e62292cc7f22fc254dca1c03fe3e11af4558a780e4b7bc\" returns successfully" Dec 13 01:59:37.839962 kubelet[2213]: I1213 01:59:37.839903 2213 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 01:59:37.841800 kubelet[2213]: I1213 01:59:37.841780 2213 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 01:59:37.944478 kubelet[2213]: I1213 01:59:37.944437 2213 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-z69kl" podStartSLOduration=27.163035422 podStartE2EDuration="37.944398896s" podCreationTimestamp="2024-12-13 01:59:00 +0000 UTC" firstStartedPulling="2024-12-13 01:59:26.007151544 +0000 UTC m=+46.375499782" lastFinishedPulling="2024-12-13 01:59:36.788515019 +0000 UTC m=+57.156863256" 
observedRunningTime="2024-12-13 01:59:37.943771589 +0000 UTC m=+58.312119816" watchObservedRunningTime="2024-12-13 01:59:37.944398896 +0000 UTC m=+58.312747123" Dec 13 01:59:37.954361 kubelet[2213]: I1213 01:59:37.954318 2213 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-76c6f6c975-mmn2h" podStartSLOduration=29.380118883 podStartE2EDuration="37.954259192s" podCreationTimestamp="2024-12-13 01:59:00 +0000 UTC" firstStartedPulling="2024-12-13 01:59:28.642698969 +0000 UTC m=+49.011047207" lastFinishedPulling="2024-12-13 01:59:37.216839269 +0000 UTC m=+57.585187516" observedRunningTime="2024-12-13 01:59:37.953574182 +0000 UTC m=+58.321922419" watchObservedRunningTime="2024-12-13 01:59:37.954259192 +0000 UTC m=+58.322607429" Dec 13 01:59:37.963000 audit[4689]: NETFILTER_CFG table=filter:119 family=2 entries=8 op=nft_register_rule pid=4689 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 01:59:37.963000 audit[4689]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffc6346eaa0 a2=0 a3=7ffc6346ea8c items=0 ppid=2382 pid=4689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:37.963000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:59:37.971000 audit[4689]: NETFILTER_CFG table=nat:120 family=2 entries=30 op=nft_register_rule pid=4689 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 01:59:37.971000 audit[4689]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffc6346eaa0 a2=0 a3=7ffc6346ea8c items=0 ppid=2382 pid=4689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:37.971000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:59:38.187000 audit[4691]: NETFILTER_CFG table=filter:121 family=2 entries=8 op=nft_register_rule pid=4691 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 01:59:38.187000 audit[4691]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7fff5310c270 a2=0 a3=7fff5310c25c items=0 ppid=2382 pid=4691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:38.187000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:59:38.194000 audit[4691]: NETFILTER_CFG table=nat:122 family=2 entries=34 op=nft_register_chain pid=4691 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 01:59:38.194000 audit[4691]: SYSCALL arch=c000003e syscall=46 success=yes exit=11236 a0=3 a1=7fff5310c270 a2=0 a3=7fff5310c25c items=0 ppid=2382 pid=4691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:38.194000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 
01:59:39.732882 env[1308]: time="2024-12-13T01:59:39.731846608Z" level=info msg="StopPodSandbox for \"f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c\"" Dec 13 01:59:39.925982 env[1308]: 2024-12-13 01:59:39.831 [WARNING][4710] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--dfz7m-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"d5f89f4f-4968-46b5-9f7b-bc1d64f6aef1", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 58, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bc53cd1418ce2cf4d0f18fee993988babd90657c4cc3e1aa14ea73ed2ddc989d", Pod:"coredns-76f75df574-dfz7m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali16bf58cb103", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:59:39.925982 env[1308]: 2024-12-13 01:59:39.831 [INFO][4710] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c" Dec 13 01:59:39.925982 env[1308]: 2024-12-13 01:59:39.831 [INFO][4710] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c" iface="eth0" netns="" Dec 13 01:59:39.925982 env[1308]: 2024-12-13 01:59:39.831 [INFO][4710] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c" Dec 13 01:59:39.925982 env[1308]: 2024-12-13 01:59:39.831 [INFO][4710] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c" Dec 13 01:59:39.925982 env[1308]: 2024-12-13 01:59:39.873 [INFO][4721] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c" HandleID="k8s-pod-network.f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c" Workload="localhost-k8s-coredns--76f75df574--dfz7m-eth0" Dec 13 01:59:39.925982 env[1308]: 2024-12-13 01:59:39.873 [INFO][4721] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:59:39.925982 env[1308]: 2024-12-13 01:59:39.873 [INFO][4721] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:59:39.925982 env[1308]: 2024-12-13 01:59:39.897 [WARNING][4721] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c" HandleID="k8s-pod-network.f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c" Workload="localhost-k8s-coredns--76f75df574--dfz7m-eth0" Dec 13 01:59:39.925982 env[1308]: 2024-12-13 01:59:39.897 [INFO][4721] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c" HandleID="k8s-pod-network.f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c" Workload="localhost-k8s-coredns--76f75df574--dfz7m-eth0" Dec 13 01:59:39.925982 env[1308]: 2024-12-13 01:59:39.912 [INFO][4721] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:59:39.925982 env[1308]: 2024-12-13 01:59:39.922 [INFO][4710] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c" Dec 13 01:59:39.925982 env[1308]: time="2024-12-13T01:59:39.925406554Z" level=info msg="TearDown network for sandbox \"f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c\" successfully" Dec 13 01:59:39.925982 env[1308]: time="2024-12-13T01:59:39.925450078Z" level=info msg="StopPodSandbox for \"f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c\" returns successfully" Dec 13 01:59:39.927802 env[1308]: time="2024-12-13T01:59:39.927743481Z" level=info msg="RemovePodSandbox for \"f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c\"" Dec 13 01:59:39.927939 env[1308]: time="2024-12-13T01:59:39.927807595Z" level=info msg="Forcibly stopping sandbox \"f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c\"" Dec 13 01:59:40.109626 env[1308]: 2024-12-13 01:59:40.033 [WARNING][4747] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--dfz7m-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"d5f89f4f-4968-46b5-9f7b-bc1d64f6aef1", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 58, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bc53cd1418ce2cf4d0f18fee993988babd90657c4cc3e1aa14ea73ed2ddc989d", Pod:"coredns-76f75df574-dfz7m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali16bf58cb103", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:59:40.109626 env[1308]: 2024-12-13 01:59:40.034 [INFO][4747] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c" Dec 13 01:59:40.109626 env[1308]: 2024-12-13 01:59:40.034 [INFO][4747] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c" iface="eth0" netns="" Dec 13 01:59:40.109626 env[1308]: 2024-12-13 01:59:40.034 [INFO][4747] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c" Dec 13 01:59:40.109626 env[1308]: 2024-12-13 01:59:40.034 [INFO][4747] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c" Dec 13 01:59:40.109626 env[1308]: 2024-12-13 01:59:40.087 [INFO][4754] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c" HandleID="k8s-pod-network.f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c" Workload="localhost-k8s-coredns--76f75df574--dfz7m-eth0" Dec 13 01:59:40.109626 env[1308]: 2024-12-13 01:59:40.087 [INFO][4754] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:59:40.109626 env[1308]: 2024-12-13 01:59:40.087 [INFO][4754] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:59:40.109626 env[1308]: 2024-12-13 01:59:40.099 [WARNING][4754] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c" HandleID="k8s-pod-network.f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c" Workload="localhost-k8s-coredns--76f75df574--dfz7m-eth0" Dec 13 01:59:40.109626 env[1308]: 2024-12-13 01:59:40.099 [INFO][4754] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c" HandleID="k8s-pod-network.f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c" Workload="localhost-k8s-coredns--76f75df574--dfz7m-eth0" Dec 13 01:59:40.109626 env[1308]: 2024-12-13 01:59:40.102 [INFO][4754] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:59:40.109626 env[1308]: 2024-12-13 01:59:40.106 [INFO][4747] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c" Dec 13 01:59:40.109626 env[1308]: time="2024-12-13T01:59:40.108879551Z" level=info msg="TearDown network for sandbox \"f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c\" successfully" Dec 13 01:59:40.124200 env[1308]: time="2024-12-13T01:59:40.124107152Z" level=info msg="RemovePodSandbox \"f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c\" returns successfully" Dec 13 01:59:40.125356 env[1308]: time="2024-12-13T01:59:40.124760647Z" level=info msg="StopPodSandbox for \"38781b7abae0ebfd72664d0396e266f8efc21e1f10de067f1aa7e04aa10ab54d\"" Dec 13 01:59:40.372367 env[1308]: 2024-12-13 01:59:40.255 [WARNING][4773] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="38781b7abae0ebfd72664d0396e266f8efc21e1f10de067f1aa7e04aa10ab54d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76c6f6c975--rhnlz-eth0", GenerateName:"calico-apiserver-76c6f6c975-", Namespace:"calico-apiserver", SelfLink:"", UID:"486a846a-be07-4723-8e84-72e633e51630", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 59, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76c6f6c975", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"af6c214d6a9c2e9ec35fcfd8ee70671f89a8925aef7c40d4165aaf878cce43c7", Pod:"calico-apiserver-76c6f6c975-rhnlz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie20cb9b7f2c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:59:40.372367 env[1308]: 2024-12-13 01:59:40.255 [INFO][4773] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="38781b7abae0ebfd72664d0396e266f8efc21e1f10de067f1aa7e04aa10ab54d" Dec 13 01:59:40.372367 env[1308]: 2024-12-13 01:59:40.255 [INFO][4773] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="38781b7abae0ebfd72664d0396e266f8efc21e1f10de067f1aa7e04aa10ab54d" iface="eth0" netns="" Dec 13 01:59:40.372367 env[1308]: 2024-12-13 01:59:40.255 [INFO][4773] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="38781b7abae0ebfd72664d0396e266f8efc21e1f10de067f1aa7e04aa10ab54d" Dec 13 01:59:40.372367 env[1308]: 2024-12-13 01:59:40.255 [INFO][4773] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="38781b7abae0ebfd72664d0396e266f8efc21e1f10de067f1aa7e04aa10ab54d" Dec 13 01:59:40.372367 env[1308]: 2024-12-13 01:59:40.319 [INFO][4780] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="38781b7abae0ebfd72664d0396e266f8efc21e1f10de067f1aa7e04aa10ab54d" HandleID="k8s-pod-network.38781b7abae0ebfd72664d0396e266f8efc21e1f10de067f1aa7e04aa10ab54d" Workload="localhost-k8s-calico--apiserver--76c6f6c975--rhnlz-eth0" Dec 13 01:59:40.372367 env[1308]: 2024-12-13 01:59:40.319 [INFO][4780] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:59:40.372367 env[1308]: 2024-12-13 01:59:40.319 [INFO][4780] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:59:40.372367 env[1308]: 2024-12-13 01:59:40.338 [WARNING][4780] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="38781b7abae0ebfd72664d0396e266f8efc21e1f10de067f1aa7e04aa10ab54d" HandleID="k8s-pod-network.38781b7abae0ebfd72664d0396e266f8efc21e1f10de067f1aa7e04aa10ab54d" Workload="localhost-k8s-calico--apiserver--76c6f6c975--rhnlz-eth0" Dec 13 01:59:40.372367 env[1308]: 2024-12-13 01:59:40.338 [INFO][4780] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="38781b7abae0ebfd72664d0396e266f8efc21e1f10de067f1aa7e04aa10ab54d" HandleID="k8s-pod-network.38781b7abae0ebfd72664d0396e266f8efc21e1f10de067f1aa7e04aa10ab54d" Workload="localhost-k8s-calico--apiserver--76c6f6c975--rhnlz-eth0" Dec 13 01:59:40.372367 env[1308]: 2024-12-13 01:59:40.366 [INFO][4780] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:59:40.372367 env[1308]: 2024-12-13 01:59:40.370 [INFO][4773] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="38781b7abae0ebfd72664d0396e266f8efc21e1f10de067f1aa7e04aa10ab54d" Dec 13 01:59:40.373000 env[1308]: time="2024-12-13T01:59:40.372368066Z" level=info msg="TearDown network for sandbox \"38781b7abae0ebfd72664d0396e266f8efc21e1f10de067f1aa7e04aa10ab54d\" successfully" Dec 13 01:59:40.373000 env[1308]: time="2024-12-13T01:59:40.372408183Z" level=info msg="StopPodSandbox for \"38781b7abae0ebfd72664d0396e266f8efc21e1f10de067f1aa7e04aa10ab54d\" returns successfully" Dec 13 01:59:40.373826 env[1308]: time="2024-12-13T01:59:40.373359986Z" level=info msg="RemovePodSandbox for \"38781b7abae0ebfd72664d0396e266f8efc21e1f10de067f1aa7e04aa10ab54d\"" Dec 13 01:59:40.373826 env[1308]: time="2024-12-13T01:59:40.373401115Z" level=info msg="Forcibly stopping sandbox \"38781b7abae0ebfd72664d0396e266f8efc21e1f10de067f1aa7e04aa10ab54d\"" Dec 13 01:59:40.556187 env[1308]: 2024-12-13 01:59:40.477 [WARNING][4804] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="38781b7abae0ebfd72664d0396e266f8efc21e1f10de067f1aa7e04aa10ab54d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76c6f6c975--rhnlz-eth0", GenerateName:"calico-apiserver-76c6f6c975-", Namespace:"calico-apiserver", SelfLink:"", UID:"486a846a-be07-4723-8e84-72e633e51630", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 59, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76c6f6c975", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"af6c214d6a9c2e9ec35fcfd8ee70671f89a8925aef7c40d4165aaf878cce43c7", Pod:"calico-apiserver-76c6f6c975-rhnlz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie20cb9b7f2c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:59:40.556187 env[1308]: 2024-12-13 01:59:40.479 [INFO][4804] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="38781b7abae0ebfd72664d0396e266f8efc21e1f10de067f1aa7e04aa10ab54d" Dec 13 01:59:40.556187 env[1308]: 2024-12-13 01:59:40.479 [INFO][4804] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="38781b7abae0ebfd72664d0396e266f8efc21e1f10de067f1aa7e04aa10ab54d" iface="eth0" netns="" Dec 13 01:59:40.556187 env[1308]: 2024-12-13 01:59:40.479 [INFO][4804] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="38781b7abae0ebfd72664d0396e266f8efc21e1f10de067f1aa7e04aa10ab54d" Dec 13 01:59:40.556187 env[1308]: 2024-12-13 01:59:40.479 [INFO][4804] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="38781b7abae0ebfd72664d0396e266f8efc21e1f10de067f1aa7e04aa10ab54d" Dec 13 01:59:40.556187 env[1308]: 2024-12-13 01:59:40.520 [INFO][4811] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="38781b7abae0ebfd72664d0396e266f8efc21e1f10de067f1aa7e04aa10ab54d" HandleID="k8s-pod-network.38781b7abae0ebfd72664d0396e266f8efc21e1f10de067f1aa7e04aa10ab54d" Workload="localhost-k8s-calico--apiserver--76c6f6c975--rhnlz-eth0" Dec 13 01:59:40.556187 env[1308]: 2024-12-13 01:59:40.522 [INFO][4811] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:59:40.556187 env[1308]: 2024-12-13 01:59:40.522 [INFO][4811] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:59:40.556187 env[1308]: 2024-12-13 01:59:40.540 [WARNING][4811] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="38781b7abae0ebfd72664d0396e266f8efc21e1f10de067f1aa7e04aa10ab54d" HandleID="k8s-pod-network.38781b7abae0ebfd72664d0396e266f8efc21e1f10de067f1aa7e04aa10ab54d" Workload="localhost-k8s-calico--apiserver--76c6f6c975--rhnlz-eth0" Dec 13 01:59:40.556187 env[1308]: 2024-12-13 01:59:40.540 [INFO][4811] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="38781b7abae0ebfd72664d0396e266f8efc21e1f10de067f1aa7e04aa10ab54d" HandleID="k8s-pod-network.38781b7abae0ebfd72664d0396e266f8efc21e1f10de067f1aa7e04aa10ab54d" Workload="localhost-k8s-calico--apiserver--76c6f6c975--rhnlz-eth0" Dec 13 01:59:40.556187 env[1308]: 2024-12-13 01:59:40.543 [INFO][4811] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:59:40.556187 env[1308]: 2024-12-13 01:59:40.553 [INFO][4804] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="38781b7abae0ebfd72664d0396e266f8efc21e1f10de067f1aa7e04aa10ab54d" Dec 13 01:59:40.556187 env[1308]: time="2024-12-13T01:59:40.555669948Z" level=info msg="TearDown network for sandbox \"38781b7abae0ebfd72664d0396e266f8efc21e1f10de067f1aa7e04aa10ab54d\" successfully" Dec 13 01:59:40.564027 env[1308]: time="2024-12-13T01:59:40.563953851Z" level=info msg="RemovePodSandbox \"38781b7abae0ebfd72664d0396e266f8efc21e1f10de067f1aa7e04aa10ab54d\" returns successfully" Dec 13 01:59:40.565323 env[1308]: time="2024-12-13T01:59:40.564651822Z" level=info msg="StopPodSandbox for \"8b8b0df0c0edd47872b987ecccb00443d71b1feaa3b14e4c3d68228795ca6ffa\"" Dec 13 01:59:40.626928 kernel: kauditd_printk_skb: 25 callbacks suppressed Dec 13 01:59:40.627111 kernel: audit: type=1130 audit(1734055180.619:475): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.48:22-10.0.0.1:57282 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:40.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.48:22-10.0.0.1:57282 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:40.621043 systemd[1]: Started sshd@14-10.0.0.48:22-10.0.0.1:57282.service. 
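Aside: the StopPodSandbox / RemovePodSandbox teardowns around this point wrap Calico's cni-plugin and ipam messages inside containerd env[...] records, which makes the event sequence hard to follow. A rough extraction sketch, assuming the raw journal text is available as a string; the regex is tailored to the exact format seen here and is not a general parser:

```python
import re

# Pull the embedded Calico CNI messages out of containerd's env[...] records,
# yielding (level, source file, message, short container id) tuples.
CNI_LINE = re.compile(
    r'\[(INFO|WARNING)\]\[\d+\]\s+(\S+)\s+\d+:\s+([^"]*?)\s*ContainerID="([0-9a-f]+)"'
)

def cni_events(journal_text: str):
    for level, source, message, container_id in CNI_LINE.findall(journal_text):
        yield level, source, message.strip(), container_id[:12]

# Example against one line from the records above:
sample = ('2024-12-13 01:59:40.034 [INFO][4747] cni-plugin/k8s.go 608: '
          'Cleaning up netns ContainerID="f90a0bc7687ea0c71a95f357203c0cce4925c0cef9ff638c050c3c3c0f8e4d9c"')
print(list(cni_events(sample)))
# [('INFO', 'cni-plugin/k8s.go', 'Cleaning up netns', 'f90a0bc7687e')]
```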
Dec 13 01:59:40.710250 sshd[4842]: Accepted publickey for core from 10.0.0.1 port 57282 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 01:59:40.708000 audit[4842]: USER_ACCT pid=4842 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:40.715815 kernel: audit: type=1101 audit(1734055180.708:476): pid=4842 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:40.714000 audit[4842]: CRED_ACQ pid=4842 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:40.717324 sshd[4842]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:59:40.732564 kernel: audit: type=1103 audit(1734055180.714:477): pid=4842 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:40.734836 kernel: audit: type=1006 audit(1734055180.714:478): pid=4842 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Dec 13 01:59:40.734882 kernel: audit: type=1300 audit(1734055180.714:478): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe1cd5da50 a2=3 a3=0 items=0 ppid=1 pid=4842 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:40.734908 kernel: audit: type=1327 audit(1734055180.714:478): proctitle=737368643A20636F7265205B707269765D Dec 13 01:59:40.714000 audit[4842]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe1cd5da50 a2=3 a3=0 items=0 ppid=1 pid=4842 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:40.714000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 01:59:40.739256 systemd[1]: Started session-15.scope. Dec 13 01:59:40.739874 systemd-logind[1291]: New session 15 of user core. Dec 13 01:59:40.751530 env[1308]: 2024-12-13 01:59:40.673 [WARNING][4836] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8b8b0df0c0edd47872b987ecccb00443d71b1feaa3b14e4c3d68228795ca6ffa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--z69kl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7937f569-a24d-4eec-b55c-c7674aa42251", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 59, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d76f24044d3ac055e86eaf60d6ca255137c59e5f97cea51f6ae25b42a366acf3", Pod:"csi-node-driver-z69kl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali26be98e69f9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:59:40.751530 env[1308]: 2024-12-13 01:59:40.673 [INFO][4836] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8b8b0df0c0edd47872b987ecccb00443d71b1feaa3b14e4c3d68228795ca6ffa" Dec 13 01:59:40.751530 env[1308]: 2024-12-13 01:59:40.673 [INFO][4836] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8b8b0df0c0edd47872b987ecccb00443d71b1feaa3b14e4c3d68228795ca6ffa" iface="eth0" netns="" Dec 13 01:59:40.751530 env[1308]: 2024-12-13 01:59:40.673 [INFO][4836] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8b8b0df0c0edd47872b987ecccb00443d71b1feaa3b14e4c3d68228795ca6ffa" Dec 13 01:59:40.751530 env[1308]: 2024-12-13 01:59:40.673 [INFO][4836] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8b8b0df0c0edd47872b987ecccb00443d71b1feaa3b14e4c3d68228795ca6ffa" Dec 13 01:59:40.751530 env[1308]: 2024-12-13 01:59:40.720 [INFO][4845] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8b8b0df0c0edd47872b987ecccb00443d71b1feaa3b14e4c3d68228795ca6ffa" HandleID="k8s-pod-network.8b8b0df0c0edd47872b987ecccb00443d71b1feaa3b14e4c3d68228795ca6ffa" Workload="localhost-k8s-csi--node--driver--z69kl-eth0" Dec 13 01:59:40.751530 env[1308]: 2024-12-13 01:59:40.720 [INFO][4845] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:59:40.751530 env[1308]: 2024-12-13 01:59:40.720 [INFO][4845] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:59:40.751530 env[1308]: 2024-12-13 01:59:40.729 [WARNING][4845] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8b8b0df0c0edd47872b987ecccb00443d71b1feaa3b14e4c3d68228795ca6ffa" HandleID="k8s-pod-network.8b8b0df0c0edd47872b987ecccb00443d71b1feaa3b14e4c3d68228795ca6ffa" Workload="localhost-k8s-csi--node--driver--z69kl-eth0" Dec 13 01:59:40.751530 env[1308]: 2024-12-13 01:59:40.729 [INFO][4845] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8b8b0df0c0edd47872b987ecccb00443d71b1feaa3b14e4c3d68228795ca6ffa" HandleID="k8s-pod-network.8b8b0df0c0edd47872b987ecccb00443d71b1feaa3b14e4c3d68228795ca6ffa" Workload="localhost-k8s-csi--node--driver--z69kl-eth0" Dec 13 01:59:40.751530 env[1308]: 2024-12-13 01:59:40.739 [INFO][4845] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:59:40.751530 env[1308]: 2024-12-13 01:59:40.746 [INFO][4836] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8b8b0df0c0edd47872b987ecccb00443d71b1feaa3b14e4c3d68228795ca6ffa" Dec 13 01:59:40.751530 env[1308]: time="2024-12-13T01:59:40.750574807Z" level=info msg="TearDown network for sandbox \"8b8b0df0c0edd47872b987ecccb00443d71b1feaa3b14e4c3d68228795ca6ffa\" successfully" Dec 13 01:59:40.751530 env[1308]: time="2024-12-13T01:59:40.750617320Z" level=info msg="StopPodSandbox for \"8b8b0df0c0edd47872b987ecccb00443d71b1feaa3b14e4c3d68228795ca6ffa\" returns successfully" Dec 13 01:59:40.754562 env[1308]: time="2024-12-13T01:59:40.752960535Z" level=info msg="RemovePodSandbox for \"8b8b0df0c0edd47872b987ecccb00443d71b1feaa3b14e4c3d68228795ca6ffa\"" Dec 13 01:59:40.754562 env[1308]: time="2024-12-13T01:59:40.753001675Z" level=info msg="Forcibly stopping sandbox \"8b8b0df0c0edd47872b987ecccb00443d71b1feaa3b14e4c3d68228795ca6ffa\"" Dec 13 01:59:40.777000 audit[4842]: USER_START pid=4842 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:40.787947 kernel: audit: type=1105 audit(1734055180.777:479): pid=4842 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:40.788129 kernel: audit: type=1103 audit(1734055180.777:480): pid=4873 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:40.777000 audit[4873]: CRED_ACQ pid=4873 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:40.847709 env[1308]: 2024-12-13 01:59:40.809 [WARNING][4867] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8b8b0df0c0edd47872b987ecccb00443d71b1feaa3b14e4c3d68228795ca6ffa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--z69kl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7937f569-a24d-4eec-b55c-c7674aa42251", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 59, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d76f24044d3ac055e86eaf60d6ca255137c59e5f97cea51f6ae25b42a366acf3", Pod:"csi-node-driver-z69kl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali26be98e69f9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:59:40.847709 env[1308]: 2024-12-13 01:59:40.810 [INFO][4867] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8b8b0df0c0edd47872b987ecccb00443d71b1feaa3b14e4c3d68228795ca6ffa" Dec 13 01:59:40.847709 env[1308]: 2024-12-13 01:59:40.810 [INFO][4867] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8b8b0df0c0edd47872b987ecccb00443d71b1feaa3b14e4c3d68228795ca6ffa" iface="eth0" netns="" Dec 13 01:59:40.847709 env[1308]: 2024-12-13 01:59:40.810 [INFO][4867] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8b8b0df0c0edd47872b987ecccb00443d71b1feaa3b14e4c3d68228795ca6ffa" Dec 13 01:59:40.847709 env[1308]: 2024-12-13 01:59:40.810 [INFO][4867] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8b8b0df0c0edd47872b987ecccb00443d71b1feaa3b14e4c3d68228795ca6ffa" Dec 13 01:59:40.847709 env[1308]: 2024-12-13 01:59:40.835 [INFO][4875] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8b8b0df0c0edd47872b987ecccb00443d71b1feaa3b14e4c3d68228795ca6ffa" HandleID="k8s-pod-network.8b8b0df0c0edd47872b987ecccb00443d71b1feaa3b14e4c3d68228795ca6ffa" Workload="localhost-k8s-csi--node--driver--z69kl-eth0" Dec 13 01:59:40.847709 env[1308]: 2024-12-13 01:59:40.835 [INFO][4875] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:59:40.847709 env[1308]: 2024-12-13 01:59:40.835 [INFO][4875] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:59:40.847709 env[1308]: 2024-12-13 01:59:40.842 [WARNING][4875] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8b8b0df0c0edd47872b987ecccb00443d71b1feaa3b14e4c3d68228795ca6ffa" HandleID="k8s-pod-network.8b8b0df0c0edd47872b987ecccb00443d71b1feaa3b14e4c3d68228795ca6ffa" Workload="localhost-k8s-csi--node--driver--z69kl-eth0" Dec 13 01:59:40.847709 env[1308]: 2024-12-13 01:59:40.842 [INFO][4875] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8b8b0df0c0edd47872b987ecccb00443d71b1feaa3b14e4c3d68228795ca6ffa" HandleID="k8s-pod-network.8b8b0df0c0edd47872b987ecccb00443d71b1feaa3b14e4c3d68228795ca6ffa" Workload="localhost-k8s-csi--node--driver--z69kl-eth0" Dec 13 01:59:40.847709 env[1308]: 2024-12-13 01:59:40.843 [INFO][4875] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:59:40.847709 env[1308]: 2024-12-13 01:59:40.845 [INFO][4867] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8b8b0df0c0edd47872b987ecccb00443d71b1feaa3b14e4c3d68228795ca6ffa" Dec 13 01:59:40.847709 env[1308]: time="2024-12-13T01:59:40.847055878Z" level=info msg="TearDown network for sandbox \"8b8b0df0c0edd47872b987ecccb00443d71b1feaa3b14e4c3d68228795ca6ffa\" successfully" Dec 13 01:59:40.854142 env[1308]: time="2024-12-13T01:59:40.854085413Z" level=info msg="RemovePodSandbox \"8b8b0df0c0edd47872b987ecccb00443d71b1feaa3b14e4c3d68228795ca6ffa\" returns successfully" Dec 13 01:59:40.855291 env[1308]: time="2024-12-13T01:59:40.855247051Z" level=info msg="StopPodSandbox for \"c4e7b8e0bf913318ecc0fbd0a00b9520d7886a87b3238ff947e3fd43d3e03d05\"" Dec 13 01:59:40.918135 sshd[4842]: pam_unix(sshd:session): session closed for user core Dec 13 01:59:40.919000 audit[4842]: USER_END pid=4842 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:40.928305 kernel: audit: type=1106 audit(1734055180.919:481): pid=4842 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:40.929062 systemd-logind[1291]: Session 15 logged out. Waiting for processes to exit. Dec 13 01:59:40.926000 audit[4842]: CRED_DISP pid=4842 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:40.937879 kernel: audit: type=1104 audit(1734055180.926:482): pid=4842 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:40.936166 systemd[1]: sshd@14-10.0.0.48:22-10.0.0.1:57282.service: Deactivated successfully. Dec 13 01:59:40.937393 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 01:59:40.939305 systemd-logind[1291]: Removed session 15. Dec 13 01:59:40.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.48:22-10.0.0.1:57282 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:59:40.967334 env[1308]: 2024-12-13 01:59:40.894 [WARNING][4905] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c4e7b8e0bf913318ecc0fbd0a00b9520d7886a87b3238ff947e3fd43d3e03d05" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76c6f6c975--mmn2h-eth0", GenerateName:"calico-apiserver-76c6f6c975-", Namespace:"calico-apiserver", SelfLink:"", UID:"55f83b0b-5364-4958-b303-1b06d5dd6c20", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 59, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76c6f6c975", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2566c02b8a612864d3a26278d529654f7c4da3b2e0b855a99e8b790b608d2cd7", Pod:"calico-apiserver-76c6f6c975-mmn2h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali05712ea44be", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:59:40.967334 env[1308]: 2024-12-13 01:59:40.894 [INFO][4905] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c4e7b8e0bf913318ecc0fbd0a00b9520d7886a87b3238ff947e3fd43d3e03d05" Dec 13 01:59:40.967334 env[1308]: 2024-12-13 01:59:40.894 [INFO][4905] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c4e7b8e0bf913318ecc0fbd0a00b9520d7886a87b3238ff947e3fd43d3e03d05" iface="eth0" netns="" Dec 13 01:59:40.967334 env[1308]: 2024-12-13 01:59:40.894 [INFO][4905] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c4e7b8e0bf913318ecc0fbd0a00b9520d7886a87b3238ff947e3fd43d3e03d05" Dec 13 01:59:40.967334 env[1308]: 2024-12-13 01:59:40.894 [INFO][4905] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c4e7b8e0bf913318ecc0fbd0a00b9520d7886a87b3238ff947e3fd43d3e03d05" Dec 13 01:59:40.967334 env[1308]: 2024-12-13 01:59:40.941 [INFO][4914] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c4e7b8e0bf913318ecc0fbd0a00b9520d7886a87b3238ff947e3fd43d3e03d05" HandleID="k8s-pod-network.c4e7b8e0bf913318ecc0fbd0a00b9520d7886a87b3238ff947e3fd43d3e03d05" Workload="localhost-k8s-calico--apiserver--76c6f6c975--mmn2h-eth0" Dec 13 01:59:40.967334 env[1308]: 2024-12-13 01:59:40.942 [INFO][4914] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:59:40.967334 env[1308]: 2024-12-13 01:59:40.942 [INFO][4914] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:59:40.967334 env[1308]: 2024-12-13 01:59:40.950 [WARNING][4914] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c4e7b8e0bf913318ecc0fbd0a00b9520d7886a87b3238ff947e3fd43d3e03d05" HandleID="k8s-pod-network.c4e7b8e0bf913318ecc0fbd0a00b9520d7886a87b3238ff947e3fd43d3e03d05" Workload="localhost-k8s-calico--apiserver--76c6f6c975--mmn2h-eth0" Dec 13 01:59:40.967334 env[1308]: 2024-12-13 01:59:40.950 [INFO][4914] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c4e7b8e0bf913318ecc0fbd0a00b9520d7886a87b3238ff947e3fd43d3e03d05" HandleID="k8s-pod-network.c4e7b8e0bf913318ecc0fbd0a00b9520d7886a87b3238ff947e3fd43d3e03d05" Workload="localhost-k8s-calico--apiserver--76c6f6c975--mmn2h-eth0" Dec 13 01:59:40.967334 env[1308]: 2024-12-13 01:59:40.954 [INFO][4914] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:59:40.967334 env[1308]: 2024-12-13 01:59:40.957 [INFO][4905] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c4e7b8e0bf913318ecc0fbd0a00b9520d7886a87b3238ff947e3fd43d3e03d05" Dec 13 01:59:40.969269 env[1308]: time="2024-12-13T01:59:40.968015624Z" level=info msg="TearDown network for sandbox \"c4e7b8e0bf913318ecc0fbd0a00b9520d7886a87b3238ff947e3fd43d3e03d05\" successfully" Dec 13 01:59:40.969269 env[1308]: time="2024-12-13T01:59:40.968060159Z" level=info msg="StopPodSandbox for \"c4e7b8e0bf913318ecc0fbd0a00b9520d7886a87b3238ff947e3fd43d3e03d05\" returns successfully" Dec 13 01:59:40.969269 env[1308]: time="2024-12-13T01:59:40.968580686Z" level=info msg="RemovePodSandbox for \"c4e7b8e0bf913318ecc0fbd0a00b9520d7886a87b3238ff947e3fd43d3e03d05\"" Dec 13 01:59:40.969269 env[1308]: time="2024-12-13T01:59:40.968610965Z" level=info msg="Forcibly stopping sandbox \"c4e7b8e0bf913318ecc0fbd0a00b9520d7886a87b3238ff947e3fd43d3e03d05\"" Dec 13 01:59:41.096300 env[1308]: 2024-12-13 01:59:41.040 [WARNING][4938] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c4e7b8e0bf913318ecc0fbd0a00b9520d7886a87b3238ff947e3fd43d3e03d05" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--76c6f6c975--mmn2h-eth0", GenerateName:"calico-apiserver-76c6f6c975-", Namespace:"calico-apiserver", SelfLink:"", UID:"55f83b0b-5364-4958-b303-1b06d5dd6c20", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 59, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76c6f6c975", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2566c02b8a612864d3a26278d529654f7c4da3b2e0b855a99e8b790b608d2cd7", Pod:"calico-apiserver-76c6f6c975-mmn2h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali05712ea44be", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:59:41.096300 env[1308]: 2024-12-13 01:59:41.040 [INFO][4938] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c4e7b8e0bf913318ecc0fbd0a00b9520d7886a87b3238ff947e3fd43d3e03d05" Dec 13 01:59:41.096300 env[1308]: 2024-12-13 01:59:41.040 [INFO][4938] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c4e7b8e0bf913318ecc0fbd0a00b9520d7886a87b3238ff947e3fd43d3e03d05" iface="eth0" netns="" Dec 13 01:59:41.096300 env[1308]: 2024-12-13 01:59:41.040 [INFO][4938] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c4e7b8e0bf913318ecc0fbd0a00b9520d7886a87b3238ff947e3fd43d3e03d05" Dec 13 01:59:41.096300 env[1308]: 2024-12-13 01:59:41.040 [INFO][4938] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c4e7b8e0bf913318ecc0fbd0a00b9520d7886a87b3238ff947e3fd43d3e03d05" Dec 13 01:59:41.096300 env[1308]: 2024-12-13 01:59:41.076 [INFO][4945] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c4e7b8e0bf913318ecc0fbd0a00b9520d7886a87b3238ff947e3fd43d3e03d05" HandleID="k8s-pod-network.c4e7b8e0bf913318ecc0fbd0a00b9520d7886a87b3238ff947e3fd43d3e03d05" Workload="localhost-k8s-calico--apiserver--76c6f6c975--mmn2h-eth0" Dec 13 01:59:41.096300 env[1308]: 2024-12-13 01:59:41.076 [INFO][4945] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:59:41.096300 env[1308]: 2024-12-13 01:59:41.077 [INFO][4945] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:59:41.096300 env[1308]: 2024-12-13 01:59:41.087 [WARNING][4945] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c4e7b8e0bf913318ecc0fbd0a00b9520d7886a87b3238ff947e3fd43d3e03d05" HandleID="k8s-pod-network.c4e7b8e0bf913318ecc0fbd0a00b9520d7886a87b3238ff947e3fd43d3e03d05" Workload="localhost-k8s-calico--apiserver--76c6f6c975--mmn2h-eth0" Dec 13 01:59:41.096300 env[1308]: 2024-12-13 01:59:41.087 [INFO][4945] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c4e7b8e0bf913318ecc0fbd0a00b9520d7886a87b3238ff947e3fd43d3e03d05" HandleID="k8s-pod-network.c4e7b8e0bf913318ecc0fbd0a00b9520d7886a87b3238ff947e3fd43d3e03d05" Workload="localhost-k8s-calico--apiserver--76c6f6c975--mmn2h-eth0" Dec 13 01:59:41.096300 env[1308]: 2024-12-13 01:59:41.091 [INFO][4945] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:59:41.096300 env[1308]: 2024-12-13 01:59:41.093 [INFO][4938] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c4e7b8e0bf913318ecc0fbd0a00b9520d7886a87b3238ff947e3fd43d3e03d05" Dec 13 01:59:41.096929 env[1308]: time="2024-12-13T01:59:41.096326348Z" level=info msg="TearDown network for sandbox \"c4e7b8e0bf913318ecc0fbd0a00b9520d7886a87b3238ff947e3fd43d3e03d05\" successfully" Dec 13 01:59:41.102586 env[1308]: time="2024-12-13T01:59:41.102531016Z" level=info msg="RemovePodSandbox \"c4e7b8e0bf913318ecc0fbd0a00b9520d7886a87b3238ff947e3fd43d3e03d05\" returns successfully" Dec 13 01:59:41.103747 env[1308]: time="2024-12-13T01:59:41.103689917Z" level=info msg="StopPodSandbox for \"979c13085af9ea43644edf2ed921c4edb455c52536dacfe82ed22e69a8d5ff28\"" Dec 13 01:59:41.188222 env[1308]: 2024-12-13 01:59:41.156 [WARNING][4968] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="979c13085af9ea43644edf2ed921c4edb455c52536dacfe82ed22e69a8d5ff28" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--jqxcg-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"0356ebac-7712-4e16-9963-c87ca7672297", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 58, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"01f838c17445c3f330ad174bea58a82872c28b457322d1fa0922347238f46480", Pod:"coredns-76f75df574-jqxcg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2783d35463a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:59:41.188222 env[1308]: 2024-12-13 01:59:41.157 [INFO][4968] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="979c13085af9ea43644edf2ed921c4edb455c52536dacfe82ed22e69a8d5ff28" Dec 13 01:59:41.188222 env[1308]: 2024-12-13 01:59:41.157 [INFO][4968] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="979c13085af9ea43644edf2ed921c4edb455c52536dacfe82ed22e69a8d5ff28" iface="eth0" netns="" Dec 13 01:59:41.188222 env[1308]: 2024-12-13 01:59:41.157 [INFO][4968] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="979c13085af9ea43644edf2ed921c4edb455c52536dacfe82ed22e69a8d5ff28" Dec 13 01:59:41.188222 env[1308]: 2024-12-13 01:59:41.157 [INFO][4968] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="979c13085af9ea43644edf2ed921c4edb455c52536dacfe82ed22e69a8d5ff28" Dec 13 01:59:41.188222 env[1308]: 2024-12-13 01:59:41.177 [INFO][4977] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="979c13085af9ea43644edf2ed921c4edb455c52536dacfe82ed22e69a8d5ff28" HandleID="k8s-pod-network.979c13085af9ea43644edf2ed921c4edb455c52536dacfe82ed22e69a8d5ff28" Workload="localhost-k8s-coredns--76f75df574--jqxcg-eth0" Dec 13 01:59:41.188222 env[1308]: 2024-12-13 01:59:41.177 [INFO][4977] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:59:41.188222 env[1308]: 2024-12-13 01:59:41.177 [INFO][4977] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:59:41.188222 env[1308]: 2024-12-13 01:59:41.182 [WARNING][4977] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="979c13085af9ea43644edf2ed921c4edb455c52536dacfe82ed22e69a8d5ff28" HandleID="k8s-pod-network.979c13085af9ea43644edf2ed921c4edb455c52536dacfe82ed22e69a8d5ff28" Workload="localhost-k8s-coredns--76f75df574--jqxcg-eth0" Dec 13 01:59:41.188222 env[1308]: 2024-12-13 01:59:41.182 [INFO][4977] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="979c13085af9ea43644edf2ed921c4edb455c52536dacfe82ed22e69a8d5ff28" HandleID="k8s-pod-network.979c13085af9ea43644edf2ed921c4edb455c52536dacfe82ed22e69a8d5ff28" Workload="localhost-k8s-coredns--76f75df574--jqxcg-eth0" Dec 13 01:59:41.188222 env[1308]: 2024-12-13 01:59:41.184 [INFO][4977] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:59:41.188222 env[1308]: 2024-12-13 01:59:41.186 [INFO][4968] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="979c13085af9ea43644edf2ed921c4edb455c52536dacfe82ed22e69a8d5ff28" Dec 13 01:59:41.188222 env[1308]: time="2024-12-13T01:59:41.188183697Z" level=info msg="TearDown network for sandbox \"979c13085af9ea43644edf2ed921c4edb455c52536dacfe82ed22e69a8d5ff28\" successfully" Dec 13 01:59:41.188222 env[1308]: time="2024-12-13T01:59:41.188215829Z" level=info msg="StopPodSandbox for \"979c13085af9ea43644edf2ed921c4edb455c52536dacfe82ed22e69a8d5ff28\" returns successfully" Dec 13 01:59:41.189621 env[1308]: time="2024-12-13T01:59:41.189089930Z" level=info msg="RemovePodSandbox for \"979c13085af9ea43644edf2ed921c4edb455c52536dacfe82ed22e69a8d5ff28\"" Dec 13 01:59:41.189621 env[1308]: time="2024-12-13T01:59:41.189137612Z" level=info msg="Forcibly stopping sandbox \"979c13085af9ea43644edf2ed921c4edb455c52536dacfe82ed22e69a8d5ff28\"" Dec 13 01:59:41.251221 env[1308]: 2024-12-13 01:59:41.224 [WARNING][5000] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="979c13085af9ea43644edf2ed921c4edb455c52536dacfe82ed22e69a8d5ff28" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--jqxcg-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"0356ebac-7712-4e16-9963-c87ca7672297", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 58, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"01f838c17445c3f330ad174bea58a82872c28b457322d1fa0922347238f46480", Pod:"coredns-76f75df574-jqxcg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2783d35463a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:59:41.251221 env[1308]: 2024-12-13 01:59:41.224 [INFO][5000] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="979c13085af9ea43644edf2ed921c4edb455c52536dacfe82ed22e69a8d5ff28" Dec 13 01:59:41.251221 env[1308]: 2024-12-13 01:59:41.224 [INFO][5000] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="979c13085af9ea43644edf2ed921c4edb455c52536dacfe82ed22e69a8d5ff28" iface="eth0" netns="" Dec 13 01:59:41.251221 env[1308]: 2024-12-13 01:59:41.224 [INFO][5000] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="979c13085af9ea43644edf2ed921c4edb455c52536dacfe82ed22e69a8d5ff28" Dec 13 01:59:41.251221 env[1308]: 2024-12-13 01:59:41.224 [INFO][5000] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="979c13085af9ea43644edf2ed921c4edb455c52536dacfe82ed22e69a8d5ff28" Dec 13 01:59:41.251221 env[1308]: 2024-12-13 01:59:41.241 [INFO][5008] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="979c13085af9ea43644edf2ed921c4edb455c52536dacfe82ed22e69a8d5ff28" HandleID="k8s-pod-network.979c13085af9ea43644edf2ed921c4edb455c52536dacfe82ed22e69a8d5ff28" Workload="localhost-k8s-coredns--76f75df574--jqxcg-eth0" Dec 13 01:59:41.251221 env[1308]: 2024-12-13 01:59:41.241 [INFO][5008] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:59:41.251221 env[1308]: 2024-12-13 01:59:41.241 [INFO][5008] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:59:41.251221 env[1308]: 2024-12-13 01:59:41.247 [WARNING][5008] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="979c13085af9ea43644edf2ed921c4edb455c52536dacfe82ed22e69a8d5ff28" HandleID="k8s-pod-network.979c13085af9ea43644edf2ed921c4edb455c52536dacfe82ed22e69a8d5ff28" Workload="localhost-k8s-coredns--76f75df574--jqxcg-eth0" Dec 13 01:59:41.251221 env[1308]: 2024-12-13 01:59:41.247 [INFO][5008] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="979c13085af9ea43644edf2ed921c4edb455c52536dacfe82ed22e69a8d5ff28" HandleID="k8s-pod-network.979c13085af9ea43644edf2ed921c4edb455c52536dacfe82ed22e69a8d5ff28" Workload="localhost-k8s-coredns--76f75df574--jqxcg-eth0" Dec 13 01:59:41.251221 env[1308]: 2024-12-13 01:59:41.248 [INFO][5008] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:59:41.251221 env[1308]: 2024-12-13 01:59:41.250 [INFO][5000] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="979c13085af9ea43644edf2ed921c4edb455c52536dacfe82ed22e69a8d5ff28" Dec 13 01:59:41.251894 env[1308]: time="2024-12-13T01:59:41.251245383Z" level=info msg="TearDown network for sandbox \"979c13085af9ea43644edf2ed921c4edb455c52536dacfe82ed22e69a8d5ff28\" successfully" Dec 13 01:59:41.254644 env[1308]: time="2024-12-13T01:59:41.254599419Z" level=info msg="RemovePodSandbox \"979c13085af9ea43644edf2ed921c4edb455c52536dacfe82ed22e69a8d5ff28\" returns successfully" Dec 13 01:59:41.255211 env[1308]: time="2024-12-13T01:59:41.255150635Z" level=info msg="StopPodSandbox for \"d75b849d433e4c95b5fa38ec6af9f0dcb9f4c6305351cd07269cd841dbd45836\"" Dec 13 01:59:41.319900 env[1308]: 2024-12-13 01:59:41.291 [WARNING][5031] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d75b849d433e4c95b5fa38ec6af9f0dcb9f4c6305351cd07269cd841dbd45836" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6ff7c669bd--mkmgc-eth0", GenerateName:"calico-kube-controllers-6ff7c669bd-", Namespace:"calico-system", SelfLink:"", UID:"4746d340-1c7d-4392-8db5-c68575618d26", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 59, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6ff7c669bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3f2b6191f84a72fdbafc6efe51af224edc248d09c2fcabeccbba94254e9cb061", Pod:"calico-kube-controllers-6ff7c669bd-mkmgc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali68cbb84f4c5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:59:41.319900 env[1308]: 2024-12-13 01:59:41.292 [INFO][5031] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d75b849d433e4c95b5fa38ec6af9f0dcb9f4c6305351cd07269cd841dbd45836" Dec 13 01:59:41.319900 env[1308]: 2024-12-13 
01:59:41.292 [INFO][5031] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d75b849d433e4c95b5fa38ec6af9f0dcb9f4c6305351cd07269cd841dbd45836" iface="eth0" netns="" Dec 13 01:59:41.319900 env[1308]: 2024-12-13 01:59:41.292 [INFO][5031] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d75b849d433e4c95b5fa38ec6af9f0dcb9f4c6305351cd07269cd841dbd45836" Dec 13 01:59:41.319900 env[1308]: 2024-12-13 01:59:41.292 [INFO][5031] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d75b849d433e4c95b5fa38ec6af9f0dcb9f4c6305351cd07269cd841dbd45836" Dec 13 01:59:41.319900 env[1308]: 2024-12-13 01:59:41.310 [INFO][5039] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d75b849d433e4c95b5fa38ec6af9f0dcb9f4c6305351cd07269cd841dbd45836" HandleID="k8s-pod-network.d75b849d433e4c95b5fa38ec6af9f0dcb9f4c6305351cd07269cd841dbd45836" Workload="localhost-k8s-calico--kube--controllers--6ff7c669bd--mkmgc-eth0" Dec 13 01:59:41.319900 env[1308]: 2024-12-13 01:59:41.310 [INFO][5039] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:59:41.319900 env[1308]: 2024-12-13 01:59:41.310 [INFO][5039] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:59:41.319900 env[1308]: 2024-12-13 01:59:41.315 [WARNING][5039] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d75b849d433e4c95b5fa38ec6af9f0dcb9f4c6305351cd07269cd841dbd45836" HandleID="k8s-pod-network.d75b849d433e4c95b5fa38ec6af9f0dcb9f4c6305351cd07269cd841dbd45836" Workload="localhost-k8s-calico--kube--controllers--6ff7c669bd--mkmgc-eth0" Dec 13 01:59:41.319900 env[1308]: 2024-12-13 01:59:41.315 [INFO][5039] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d75b849d433e4c95b5fa38ec6af9f0dcb9f4c6305351cd07269cd841dbd45836" HandleID="k8s-pod-network.d75b849d433e4c95b5fa38ec6af9f0dcb9f4c6305351cd07269cd841dbd45836" Workload="localhost-k8s-calico--kube--controllers--6ff7c669bd--mkmgc-eth0" Dec 13 01:59:41.319900 env[1308]: 2024-12-13 01:59:41.317 [INFO][5039] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:59:41.319900 env[1308]: 2024-12-13 01:59:41.318 [INFO][5031] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d75b849d433e4c95b5fa38ec6af9f0dcb9f4c6305351cd07269cd841dbd45836" Dec 13 01:59:41.320413 env[1308]: time="2024-12-13T01:59:41.319941975Z" level=info msg="TearDown network for sandbox \"d75b849d433e4c95b5fa38ec6af9f0dcb9f4c6305351cd07269cd841dbd45836\" successfully" Dec 13 01:59:41.320413 env[1308]: time="2024-12-13T01:59:41.319971172Z" level=info msg="StopPodSandbox for \"d75b849d433e4c95b5fa38ec6af9f0dcb9f4c6305351cd07269cd841dbd45836\" returns successfully" Dec 13 01:59:41.320552 env[1308]: time="2024-12-13T01:59:41.320507679Z" level=info msg="RemovePodSandbox for \"d75b849d433e4c95b5fa38ec6af9f0dcb9f4c6305351cd07269cd841dbd45836\"" Dec 13 01:59:41.320598 env[1308]: time="2024-12-13T01:59:41.320551935Z" level=info msg="Forcibly stopping sandbox \"d75b849d433e4c95b5fa38ec6af9f0dcb9f4c6305351cd07269cd841dbd45836\"" Dec 13 01:59:41.400430 env[1308]: 2024-12-13 01:59:41.353 [WARNING][5062] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d75b849d433e4c95b5fa38ec6af9f0dcb9f4c6305351cd07269cd841dbd45836" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6ff7c669bd--mkmgc-eth0", GenerateName:"calico-kube-controllers-6ff7c669bd-", Namespace:"calico-system", SelfLink:"", UID:"4746d340-1c7d-4392-8db5-c68575618d26", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 59, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6ff7c669bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3f2b6191f84a72fdbafc6efe51af224edc248d09c2fcabeccbba94254e9cb061", Pod:"calico-kube-controllers-6ff7c669bd-mkmgc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali68cbb84f4c5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:59:41.400430 env[1308]: 2024-12-13 01:59:41.353 [INFO][5062] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d75b849d433e4c95b5fa38ec6af9f0dcb9f4c6305351cd07269cd841dbd45836" Dec 13 01:59:41.400430 env[1308]: 2024-12-13 01:59:41.353 [INFO][5062] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d75b849d433e4c95b5fa38ec6af9f0dcb9f4c6305351cd07269cd841dbd45836" iface="eth0" netns="" Dec 13 01:59:41.400430 env[1308]: 2024-12-13 01:59:41.353 [INFO][5062] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d75b849d433e4c95b5fa38ec6af9f0dcb9f4c6305351cd07269cd841dbd45836" Dec 13 01:59:41.400430 env[1308]: 2024-12-13 01:59:41.353 [INFO][5062] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d75b849d433e4c95b5fa38ec6af9f0dcb9f4c6305351cd07269cd841dbd45836" Dec 13 01:59:41.400430 env[1308]: 2024-12-13 01:59:41.383 [INFO][5070] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d75b849d433e4c95b5fa38ec6af9f0dcb9f4c6305351cd07269cd841dbd45836" HandleID="k8s-pod-network.d75b849d433e4c95b5fa38ec6af9f0dcb9f4c6305351cd07269cd841dbd45836" Workload="localhost-k8s-calico--kube--controllers--6ff7c669bd--mkmgc-eth0" Dec 13 01:59:41.400430 env[1308]: 2024-12-13 01:59:41.383 [INFO][5070] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:59:41.400430 env[1308]: 2024-12-13 01:59:41.383 [INFO][5070] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:59:41.400430 env[1308]: 2024-12-13 01:59:41.391 [WARNING][5070] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d75b849d433e4c95b5fa38ec6af9f0dcb9f4c6305351cd07269cd841dbd45836" HandleID="k8s-pod-network.d75b849d433e4c95b5fa38ec6af9f0dcb9f4c6305351cd07269cd841dbd45836" Workload="localhost-k8s-calico--kube--controllers--6ff7c669bd--mkmgc-eth0" Dec 13 01:59:41.400430 env[1308]: 2024-12-13 01:59:41.391 [INFO][5070] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d75b849d433e4c95b5fa38ec6af9f0dcb9f4c6305351cd07269cd841dbd45836" HandleID="k8s-pod-network.d75b849d433e4c95b5fa38ec6af9f0dcb9f4c6305351cd07269cd841dbd45836" Workload="localhost-k8s-calico--kube--controllers--6ff7c669bd--mkmgc-eth0" Dec 13 01:59:41.400430 env[1308]: 2024-12-13 01:59:41.393 [INFO][5070] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:59:41.400430 env[1308]: 2024-12-13 01:59:41.397 [INFO][5062] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d75b849d433e4c95b5fa38ec6af9f0dcb9f4c6305351cd07269cd841dbd45836" Dec 13 01:59:41.401243 env[1308]: time="2024-12-13T01:59:41.400474936Z" level=info msg="TearDown network for sandbox \"d75b849d433e4c95b5fa38ec6af9f0dcb9f4c6305351cd07269cd841dbd45836\" successfully" Dec 13 01:59:41.405937 env[1308]: time="2024-12-13T01:59:41.405834570Z" level=info msg="RemovePodSandbox \"d75b849d433e4c95b5fa38ec6af9f0dcb9f4c6305351cd07269cd841dbd45836\" returns successfully" Dec 13 01:59:45.921053 systemd[1]: Started sshd@15-10.0.0.48:22-10.0.0.1:57296.service. Dec 13 01:59:45.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.48:22-10.0.0.1:57296 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:45.922702 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 01:59:45.922771 kernel: audit: type=1130 audit(1734055185.920:484): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.48:22-10.0.0.1:57296 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:45.951000 audit[5104]: USER_ACCT pid=5104 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:45.952000 audit[5104]: CRED_ACQ pid=5104 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:45.953155 sshd[5104]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:59:45.957161 sshd[5104]: Accepted publickey for core from 10.0.0.1 port 57296 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 01:59:45.957599 systemd-logind[1291]: New session 16 of user core. Dec 13 01:59:45.958298 systemd[1]: Started session-16.scope. 
Dec 13 01:59:45.960535 kernel: audit: type=1101 audit(1734055185.951:485): pid=5104 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:45.960659 kernel: audit: type=1103 audit(1734055185.952:486): pid=5104 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:45.960682 kernel: audit: type=1006 audit(1734055185.952:487): pid=5104 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Dec 13 01:59:45.952000 audit[5104]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffae204420 a2=3 a3=0 items=0 ppid=1 pid=5104 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:45.967619 kernel: audit: type=1300 audit(1734055185.952:487): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffae204420 a2=3 a3=0 items=0 ppid=1 pid=5104 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:45.967676 kernel: audit: type=1327 audit(1734055185.952:487): proctitle=737368643A20636F7265205B707269765D Dec 13 01:59:45.952000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 01:59:45.969147 kernel: audit: type=1105 audit(1734055185.962:488): pid=5104 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:45.962000 audit[5104]: USER_START pid=5104 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:45.973803 kernel: audit: type=1103 audit(1734055185.963:489): pid=5107 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:45.963000 audit[5107]: CRED_ACQ pid=5107 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:46.073054 sshd[5104]: pam_unix(sshd:session): session closed for user core Dec 13 01:59:46.074000 audit[5104]: USER_END pid=5104 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:46.077566 systemd[1]: sshd@15-10.0.0.48:22-10.0.0.1:57296.service: Deactivated successfully. Dec 13 01:59:46.079451 systemd[1]: session-16.scope: Deactivated successfully. 
Dec 13 01:59:46.081714 systemd-logind[1291]: Session 16 logged out. Waiting for processes to exit. Dec 13 01:59:46.074000 audit[5104]: CRED_DISP pid=5104 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:46.083229 systemd-logind[1291]: Removed session 16. Dec 13 01:59:46.088790 kernel: audit: type=1106 audit(1734055186.074:490): pid=5104 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:46.088934 kernel: audit: type=1104 audit(1734055186.074:491): pid=5104 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:46.076000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.48:22-10.0.0.1:57296 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:51.074086 systemd[1]: Started sshd@16-10.0.0.48:22-10.0.0.1:43932.service. Dec 13 01:59:51.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.48:22-10.0.0.1:43932 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:51.075253 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 01:59:51.075292 kernel: audit: type=1130 audit(1734055191.073:493): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.48:22-10.0.0.1:43932 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:51.101000 audit[5120]: USER_ACCT pid=5120 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:51.102468 sshd[5120]: Accepted publickey for core from 10.0.0.1 port 43932 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 01:59:51.106005 sshd[5120]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:59:51.104000 audit[5120]: CRED_ACQ pid=5120 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:51.109343 systemd-logind[1291]: New session 17 of user core. Dec 13 01:59:51.110086 systemd[1]: Started session-17.scope. 
Dec 13 01:59:51.110200 kernel: audit: type=1101 audit(1734055191.101:494): pid=5120 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:51.110232 kernel: audit: type=1103 audit(1734055191.104:495): pid=5120 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:51.110249 kernel: audit: type=1006 audit(1734055191.105:496): pid=5120 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Dec 13 01:59:51.105000 audit[5120]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffef5d0d880 a2=3 a3=0 items=0 ppid=1 pid=5120 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:51.116612 kernel: audit: type=1300 audit(1734055191.105:496): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffef5d0d880 a2=3 a3=0 items=0 ppid=1 pid=5120 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:51.116703 kernel: audit: type=1327 audit(1734055191.105:496): proctitle=737368643A20636F7265205B707269765D Dec 13 01:59:51.105000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 01:59:51.113000 audit[5120]: USER_START pid=5120 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:51.122282 kernel: audit: type=1105 audit(1734055191.113:497): pid=5120 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:51.122322 kernel: audit: type=1103 audit(1734055191.114:498): pid=5123 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:51.114000 audit[5123]: CRED_ACQ pid=5123 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:51.215427 sshd[5120]: pam_unix(sshd:session): session closed for user core Dec 13 01:59:51.215000 audit[5120]: USER_END pid=5120 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:51.217681 systemd[1]: sshd@16-10.0.0.48:22-10.0.0.1:43932.service: Deactivated successfully. Dec 13 01:59:51.218479 systemd[1]: session-17.scope: Deactivated successfully. 
Dec 13 01:59:51.215000 audit[5120]: CRED_DISP pid=5120 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:51.223327 systemd-logind[1291]: Session 17 logged out. Waiting for processes to exit. Dec 13 01:59:51.223996 systemd-logind[1291]: Removed session 17. Dec 13 01:59:51.224067 kernel: audit: type=1106 audit(1734055191.215:499): pid=5120 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:51.224097 kernel: audit: type=1104 audit(1734055191.215:500): pid=5120 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:51.217000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.48:22-10.0.0.1:43932 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:53.766708 kubelet[2213]: E1213 01:59:53.766119 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:59:56.219473 systemd[1]: Started sshd@17-10.0.0.48:22-10.0.0.1:60228.service. Dec 13 01:59:56.221001 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 01:59:56.221085 kernel: audit: type=1130 audit(1734055196.218:502): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.48:22-10.0.0.1:60228 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:56.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.48:22-10.0.0.1:60228 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:56.251000 audit[5136]: USER_ACCT pid=5136 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:56.252923 sshd[5136]: Accepted publickey for core from 10.0.0.1 port 60228 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 01:59:56.255313 sshd[5136]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:59:56.254000 audit[5136]: CRED_ACQ pid=5136 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:56.261103 systemd-logind[1291]: New session 18 of user core. Dec 13 01:59:56.261868 systemd[1]: Started session-18.scope. 
Dec 13 01:59:56.261988 kernel: audit: type=1101 audit(1734055196.251:503): pid=5136 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:56.262032 kernel: audit: type=1103 audit(1734055196.254:504): pid=5136 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:56.264725 kernel: audit: type=1006 audit(1734055196.254:505): pid=5136 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Dec 13 01:59:56.254000 audit[5136]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffb33e5b40 a2=3 a3=0 items=0 ppid=1 pid=5136 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:56.269533 kernel: audit: type=1300 audit(1734055196.254:505): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffb33e5b40 a2=3 a3=0 items=0 ppid=1 pid=5136 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:56.269628 kernel: audit: type=1327 audit(1734055196.254:505): proctitle=737368643A20636F7265205B707269765D Dec 13 01:59:56.254000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 01:59:56.266000 audit[5136]: USER_START pid=5136 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:56.275610 kernel: audit: type=1105 audit(1734055196.266:506): pid=5136 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:56.275674 kernel: audit: type=1103 audit(1734055196.267:507): pid=5139 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:56.267000 audit[5139]: CRED_ACQ pid=5139 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:56.374589 sshd[5136]: pam_unix(sshd:session): session closed for user core Dec 13 01:59:56.374000 audit[5136]: USER_END pid=5136 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:56.376995 systemd[1]: Started sshd@18-10.0.0.48:22-10.0.0.1:60244.service. Dec 13 01:59:56.377389 systemd[1]: sshd@17-10.0.0.48:22-10.0.0.1:60228.service: Deactivated successfully. 
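Each SSH connection in this log runs as what appears to be a socket-activated per-connection unit named after its endpoints, e.g. sshd@17-10.0.0.48:22-10.0.0.1:60228.service, where 10.0.0.48:22 is the local sshd listener and 10.0.0.1:60228 the peer (matching the "Accepted publickey ... from 10.0.0.1 port 60228" line); the SERVICE_START and SERVICE_STOP audit records bracket that connection's lifetime. A small sketch, with the naming pattern inferred from the unit names above, for pulling the endpoints back out:

    import re

    # Assumed pattern: sshd@<seq>-<local ip:port>-<peer ip:port>.service
    UNIT_RE = re.compile(
        r"sshd@(?P<seq>\d+)-(?P<local>[\d.]+:\d+)-(?P<peer>[\d.]+:\d+)\.service"
    )

    m = UNIT_RE.search("sshd@17-10.0.0.48:22-10.0.0.1:60228.service")
    if m:
        print(m.group("seq"), m.group("local"), m.group("peer"))
    # -> 17 10.0.0.48:22 10.0.0.1:60228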
Dec 13 01:59:56.377979 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 01:59:56.374000 audit[5136]: CRED_DISP pid=5136 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:56.380829 systemd-logind[1291]: Session 18 logged out. Waiting for processes to exit. Dec 13 01:59:56.381821 systemd-logind[1291]: Removed session 18. Dec 13 01:59:56.384085 kernel: audit: type=1106 audit(1734055196.374:508): pid=5136 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:56.384163 kernel: audit: type=1104 audit(1734055196.374:509): pid=5136 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:56.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.48:22-10.0.0.1:60244 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:56.374000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.48:22-10.0.0.1:60228 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:56.408000 audit[5149]: USER_ACCT pid=5149 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:56.409129 sshd[5149]: Accepted publickey for core from 10.0.0.1 port 60244 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 01:59:56.409000 audit[5149]: CRED_ACQ pid=5149 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:56.409000 audit[5149]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc14b940e0 a2=3 a3=0 items=0 ppid=1 pid=5149 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:56.409000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 01:59:56.410233 sshd[5149]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:59:56.413469 systemd-logind[1291]: New session 19 of user core. Dec 13 01:59:56.414318 systemd[1]: Started session-19.scope. 
Dec 13 01:59:56.417000 audit[5149]: USER_START pid=5149 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:56.418000 audit[5153]: CRED_ACQ pid=5153 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:56.867000 audit[5149]: USER_END pid=5149 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:56.867000 audit[5149]: CRED_DISP pid=5149 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:56.868035 sshd[5149]: pam_unix(sshd:session): session closed for user core Dec 13 01:59:56.870375 systemd[1]: Started sshd@19-10.0.0.48:22-10.0.0.1:60250.service. Dec 13 01:59:56.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.48:22-10.0.0.1:60250 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:56.871873 systemd[1]: sshd@18-10.0.0.48:22-10.0.0.1:60244.service: Deactivated successfully. Dec 13 01:59:56.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.48:22-10.0.0.1:60244 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:56.872889 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 01:59:56.872901 systemd-logind[1291]: Session 19 logged out. Waiting for processes to exit. Dec 13 01:59:56.874011 systemd-logind[1291]: Removed session 19. Dec 13 01:59:56.899000 audit[5160]: USER_ACCT pid=5160 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:56.901046 sshd[5160]: Accepted publickey for core from 10.0.0.1 port 60250 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 01:59:56.901000 audit[5160]: CRED_ACQ pid=5160 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:56.901000 audit[5160]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc1b007660 a2=3 a3=0 items=0 ppid=1 pid=5160 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:56.901000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 01:59:56.902235 sshd[5160]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:59:56.905576 systemd-logind[1291]: New session 20 of user core. 
Dec 13 01:59:56.906313 systemd[1]: Started session-20.scope. Dec 13 01:59:56.908000 audit[5160]: USER_START pid=5160 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:56.909000 audit[5165]: CRED_ACQ pid=5165 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:58.583000 audit[5200]: NETFILTER_CFG table=filter:123 family=2 entries=20 op=nft_register_rule pid=5200 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 01:59:58.583000 audit[5200]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7fff6dc18fa0 a2=0 a3=7fff6dc18f8c items=0 ppid=2382 pid=5200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:58.583000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:59:58.592047 sshd[5160]: pam_unix(sshd:session): session closed for user core Dec 13 01:59:58.591000 audit[5160]: USER_END pid=5160 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:58.591000 audit[5160]: CRED_DISP pid=5160 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:58.591000 audit[5200]: NETFILTER_CFG table=nat:124 family=2 entries=22 op=nft_register_rule pid=5200 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 01:59:58.591000 audit[5200]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7fff6dc18fa0 a2=0 a3=0 items=0 ppid=2382 pid=5200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:58.591000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:59:58.593000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.48:22-10.0.0.1:60250 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:58.595790 systemd[1]: sshd@19-10.0.0.48:22-10.0.0.1:60250.service: Deactivated successfully. Dec 13 01:59:58.601523 systemd[1]: Started sshd@20-10.0.0.48:22-10.0.0.1:60264.service. Dec 13 01:59:58.602009 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 01:59:58.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.48:22-10.0.0.1:60264 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 01:59:58.603084 systemd-logind[1291]: Session 20 logged out. Waiting for processes to exit. Dec 13 01:59:58.608293 systemd-logind[1291]: Removed session 20. Dec 13 01:59:58.624000 audit[5205]: NETFILTER_CFG table=filter:125 family=2 entries=32 op=nft_register_rule pid=5205 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 01:59:58.624000 audit[5205]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffc210747d0 a2=0 a3=7ffc210747bc items=0 ppid=2382 pid=5205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:58.624000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:59:58.646000 audit[5205]: NETFILTER_CFG table=nat:126 family=2 entries=22 op=nft_register_rule pid=5205 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 01:59:58.646000 audit[5205]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffc210747d0 a2=0 a3=0 items=0 ppid=2382 pid=5205 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:58.646000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 01:59:58.687279 sshd[5203]: Accepted publickey for core from 10.0.0.1 port 60264 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 01:59:58.685000 audit[5203]: USER_ACCT pid=5203 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:58.686000 audit[5203]: CRED_ACQ pid=5203 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:58.686000 audit[5203]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc79364e20 a2=3 a3=0 items=0 ppid=1 pid=5203 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:58.686000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 01:59:58.688898 sshd[5203]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:59:58.706038 systemd-logind[1291]: New session 21 of user core. Dec 13 01:59:58.707203 systemd[1]: Started session-21.scope. 
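Interleaved with the SSH sessions, the NETFILTER_CFG records show /usr/sbin/xtables-nft-multi repeatedly re-registering rules in the filter and nat tables. Its PROCTITLE value is the NUL-separated argv, hex-encoded; decoding it (sketch below) gives the command line iptables-restore -w 5 -W 100000 --noflush --counters:

    # Decode the hex proctitle logged with the NETFILTER_CFG events above.
    hexstr = (
        "69707461626C65732D726573746F7265002D770035002D5700313030303030"
        "002D2D6E6F666C757368002D2D636F756E74657273"
    )
    argv = [a.decode() for a in bytes.fromhex(hexstr).split(b"\x00")]
    print(argv)
    # -> ['iptables-restore', '-w', '5', '-W', '100000', '--noflush', '--counters']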
Dec 13 01:59:58.719000 audit[5203]: USER_START pid=5203 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:58.721000 audit[5208]: CRED_ACQ pid=5208 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:58.766121 kubelet[2213]: E1213 01:59:58.766049 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:59:58.769141 kubelet[2213]: E1213 01:59:58.768888 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:59:59.354759 sshd[5203]: pam_unix(sshd:session): session closed for user core Dec 13 01:59:59.354000 audit[5203]: USER_END pid=5203 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:59.354000 audit[5203]: CRED_DISP pid=5203 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:59.358307 systemd[1]: Started sshd@21-10.0.0.48:22-10.0.0.1:60268.service. Dec 13 01:59:59.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.48:22-10.0.0.1:60268 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:59.358940 systemd[1]: sshd@20-10.0.0.48:22-10.0.0.1:60264.service: Deactivated successfully. Dec 13 01:59:59.358000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.48:22-10.0.0.1:60264 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:59.361496 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 01:59:59.362357 systemd-logind[1291]: Session 21 logged out. Waiting for processes to exit. Dec 13 01:59:59.363254 systemd-logind[1291]: Removed session 21. 
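The recurring kubelet dns.go:153 error means the node's resolv.conf lists more nameservers than kubelet will pass through to pods; the extras are dropped and only the first entries are applied (here 1.1.1.1 1.0.0.1 8.8.8.8, consistent with the usual cap of three). A rough sketch of that truncation, illustrative only and not kubelet's actual code:

    def apply_nameserver_limit(nameservers, limit=3):
        # Keep the first `limit` entries and report the rest as omitted,
        # mirroring the warning text in the log above.
        kept, omitted = nameservers[:limit], nameservers[limit:]
        if omitted:
            print(f"Nameserver limits exceeded; omitted: {omitted}")
        return kept

    print(apply_nameserver_limit(["1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"]))
    # -> ['1.1.1.1', '1.0.0.1', '8.8.8.8']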
Dec 13 01:59:59.392000 audit[5216]: USER_ACCT pid=5216 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:59.394564 sshd[5216]: Accepted publickey for core from 10.0.0.1 port 60268 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 01:59:59.393000 audit[5216]: CRED_ACQ pid=5216 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:59.393000 audit[5216]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffcaeec050 a2=3 a3=0 items=0 ppid=1 pid=5216 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 01:59:59.393000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 01:59:59.395793 sshd[5216]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:59:59.399691 systemd-logind[1291]: New session 22 of user core. Dec 13 01:59:59.400428 systemd[1]: Started session-22.scope. Dec 13 01:59:59.404000 audit[5216]: USER_START pid=5216 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:59.405000 audit[5220]: CRED_ACQ pid=5220 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:59.530879 sshd[5216]: pam_unix(sshd:session): session closed for user core Dec 13 01:59:59.530000 audit[5216]: USER_END pid=5216 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:59.530000 audit[5216]: CRED_DISP pid=5216 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 01:59:59.533509 systemd[1]: sshd@21-10.0.0.48:22-10.0.0.1:60268.service: Deactivated successfully. Dec 13 01:59:59.534327 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 01:59:59.532000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.48:22-10.0.0.1:60268 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 01:59:59.535121 systemd-logind[1291]: Session 22 logged out. Waiting for processes to exit. Dec 13 01:59:59.535808 systemd-logind[1291]: Removed session 22. Dec 13 02:00:04.533802 systemd[1]: Started sshd@22-10.0.0.48:22-10.0.0.1:60272.service. Dec 13 02:00:04.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.48:22-10.0.0.1:60272 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 13 02:00:04.538467 kernel: kauditd_printk_skb: 57 callbacks suppressed Dec 13 02:00:04.538595 kernel: audit: type=1130 audit(1734055204.533:551): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.48:22-10.0.0.1:60272 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:04.566000 audit[5238]: USER_ACCT pid=5238 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 02:00:04.567198 sshd[5238]: Accepted publickey for core from 10.0.0.1 port 60272 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 02:00:04.569283 sshd[5238]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:00:04.568000 audit[5238]: CRED_ACQ pid=5238 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 02:00:04.573083 systemd-logind[1291]: New session 23 of user core. Dec 13 02:00:04.573996 systemd[1]: Started session-23.scope. Dec 13 02:00:04.577003 kernel: audit: type=1101 audit(1734055204.566:552): pid=5238 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 02:00:04.577093 kernel: audit: type=1103 audit(1734055204.568:553): pid=5238 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 02:00:04.577121 kernel: audit: type=1006 audit(1734055204.568:554): pid=5238 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Dec 13 02:00:04.568000 audit[5238]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffca334ad60 a2=3 a3=0 items=0 ppid=1 pid=5238 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:00:04.584678 kernel: audit: type=1300 audit(1734055204.568:554): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffca334ad60 a2=3 a3=0 items=0 ppid=1 pid=5238 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:00:04.584806 kernel: audit: type=1327 audit(1734055204.568:554): proctitle=737368643A20636F7265205B707269765D Dec 13 02:00:04.568000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 02:00:04.579000 audit[5238]: USER_START pid=5238 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 02:00:04.590378 kernel: audit: type=1105 audit(1734055204.579:555): pid=5238 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 02:00:04.590464 kernel: audit: type=1103 audit(1734055204.580:556): pid=5241 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 02:00:04.580000 audit[5241]: CRED_ACQ pid=5241 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 02:00:04.693373 sshd[5238]: pam_unix(sshd:session): session closed for user core Dec 13 02:00:04.693000 audit[5238]: USER_END pid=5238 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 02:00:04.695830 systemd[1]: sshd@22-10.0.0.48:22-10.0.0.1:60272.service: Deactivated successfully. Dec 13 02:00:04.696728 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 02:00:04.698499 systemd-logind[1291]: Session 23 logged out. Waiting for processes to exit. Dec 13 02:00:04.699517 systemd-logind[1291]: Removed session 23. Dec 13 02:00:04.693000 audit[5238]: CRED_DISP pid=5238 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 02:00:04.704707 kernel: audit: type=1106 audit(1734055204.693:557): pid=5238 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 02:00:04.704754 kernel: audit: type=1104 audit(1734055204.693:558): pid=5238 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 02:00:04.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.48:22-10.0.0.1:60272 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:00:05.681000 audit[5253]: NETFILTER_CFG table=filter:127 family=2 entries=20 op=nft_register_rule pid=5253 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:00:05.681000 audit[5253]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffc82f80710 a2=0 a3=7ffc82f806fc items=0 ppid=2382 pid=5253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:00:05.681000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:00:05.693000 audit[5253]: NETFILTER_CFG table=nat:128 family=2 entries=106 op=nft_register_chain pid=5253 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 02:00:05.693000 audit[5253]: SYSCALL arch=c000003e syscall=46 success=yes exit=49452 a0=3 a1=7ffc82f80710 a2=0 a3=7ffc82f806fc items=0 ppid=2382 pid=5253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:00:05.693000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 02:00:06.766310 kubelet[2213]: E1213 02:00:06.766269 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:00:09.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.48:22-10.0.0.1:48340 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:09.697020 systemd[1]: Started sshd@23-10.0.0.48:22-10.0.0.1:48340.service. Dec 13 02:00:09.698263 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 13 02:00:09.698378 kernel: audit: type=1130 audit(1734055209.696:562): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.48:22-10.0.0.1:48340 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:00:09.725000 audit[5257]: USER_ACCT pid=5257 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 02:00:09.726278 sshd[5257]: Accepted publickey for core from 10.0.0.1 port 48340 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 02:00:09.729000 audit[5257]: CRED_ACQ pid=5257 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 02:00:09.731092 sshd[5257]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:00:09.734910 kernel: audit: type=1101 audit(1734055209.725:563): pid=5257 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 02:00:09.735036 kernel: audit: type=1103 audit(1734055209.729:564): pid=5257 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 02:00:09.735063 kernel: audit: type=1006 audit(1734055209.729:565): pid=5257 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Dec 13 02:00:09.735989 systemd[1]: Started session-24.scope. Dec 13 02:00:09.736267 systemd-logind[1291]: New session 24 of user core. 
Dec 13 02:00:09.729000 audit[5257]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff572364a0 a2=3 a3=0 items=0 ppid=1 pid=5257 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:00:09.742281 kernel: audit: type=1300 audit(1734055209.729:565): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff572364a0 a2=3 a3=0 items=0 ppid=1 pid=5257 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:00:09.742332 kernel: audit: type=1327 audit(1734055209.729:565): proctitle=737368643A20636F7265205B707269765D Dec 13 02:00:09.729000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 02:00:09.739000 audit[5257]: USER_START pid=5257 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 02:00:09.748210 kernel: audit: type=1105 audit(1734055209.739:566): pid=5257 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 02:00:09.748259 kernel: audit: type=1103 audit(1734055209.741:567): pid=5260 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 02:00:09.741000 audit[5260]: CRED_ACQ pid=5260 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 02:00:09.844754 sshd[5257]: pam_unix(sshd:session): session closed for user core Dec 13 02:00:09.845000 audit[5257]: USER_END pid=5257 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 02:00:09.847289 systemd[1]: sshd@23-10.0.0.48:22-10.0.0.1:48340.service: Deactivated successfully. Dec 13 02:00:09.849066 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 02:00:09.849544 systemd-logind[1291]: Session 24 logged out. Waiting for processes to exit. Dec 13 02:00:09.845000 audit[5257]: CRED_DISP pid=5257 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 02:00:09.851035 systemd-logind[1291]: Removed session 24. 
Dec 13 02:00:09.853829 kernel: audit: type=1106 audit(1734055209.845:568): pid=5257 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 02:00:09.853902 kernel: audit: type=1104 audit(1734055209.845:569): pid=5257 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 02:00:09.845000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.48:22-10.0.0.1:48340 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:14.849149 systemd[1]: Started sshd@24-10.0.0.48:22-10.0.0.1:48342.service. Dec 13 02:00:14.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.48:22-10.0.0.1:48342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:14.850903 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 02:00:14.850957 kernel: audit: type=1130 audit(1734055214.848:571): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.48:22-10.0.0.1:48342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:14.881000 audit[5292]: USER_ACCT pid=5292 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 02:00:14.882344 sshd[5292]: Accepted publickey for core from 10.0.0.1 port 48342 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 02:00:14.885280 sshd[5292]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:00:14.884000 audit[5292]: CRED_ACQ pid=5292 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 02:00:14.891309 systemd-logind[1291]: New session 25 of user core. Dec 13 02:00:14.892156 systemd[1]: Started session-25.scope. 
Dec 13 02:00:14.893476 kernel: audit: type=1101 audit(1734055214.881:572): pid=5292 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 02:00:14.893526 kernel: audit: type=1103 audit(1734055214.884:573): pid=5292 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 02:00:14.893549 kernel: audit: type=1006 audit(1734055214.884:574): pid=5292 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Dec 13 02:00:14.884000 audit[5292]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe14556d10 a2=3 a3=0 items=0 ppid=1 pid=5292 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:00:14.902885 kernel: audit: type=1300 audit(1734055214.884:574): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe14556d10 a2=3 a3=0 items=0 ppid=1 pid=5292 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:00:14.902935 kernel: audit: type=1327 audit(1734055214.884:574): proctitle=737368643A20636F7265205B707269765D Dec 13 02:00:14.884000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 02:00:14.895000 audit[5292]: USER_START pid=5292 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 02:00:14.911251 kernel: audit: type=1105 audit(1734055214.895:575): pid=5292 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 02:00:14.896000 audit[5295]: CRED_ACQ pid=5295 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 02:00:14.916039 kernel: audit: type=1103 audit(1734055214.896:576): pid=5295 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 02:00:15.006630 sshd[5292]: pam_unix(sshd:session): session closed for user core Dec 13 02:00:15.006000 audit[5292]: USER_END pid=5292 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 02:00:15.011413 systemd[1]: sshd@24-10.0.0.48:22-10.0.0.1:48342.service: Deactivated successfully. Dec 13 02:00:15.012195 systemd[1]: session-25.scope: Deactivated successfully. 
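Every login above follows the same audit sequence: USER_ACCT and CRED_ACQ when the key is accepted, USER_START when the session scope opens, then USER_END and CRED_DISP when it closes. A small sketch that pairs starts and ends by audit session id; the event strings are schematic, only the ses= field follows the records above:

    import re

    events = [
        "USER_START pid=5292 auid=500 ses=25",
        "USER_END pid=5292 auid=500 ses=25",
    ]

    open_sessions = {}
    for ev in events:
        ses = re.search(r"\bses=(\d+)", ev).group(1)
        if ev.startswith("USER_START"):
            open_sessions[ses] = ev        # session opened
        elif ev.startswith("USER_END"):
            open_sessions.pop(ses, None)   # session closed
            print(f"audit session {ses} closed")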
Dec 13 02:00:15.009000 audit[5292]: CRED_DISP pid=5292 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 02:00:15.014616 systemd-logind[1291]: Session 25 logged out. Waiting for processes to exit. Dec 13 02:00:15.015373 systemd-logind[1291]: Removed session 25. Dec 13 02:00:15.015798 kernel: audit: type=1106 audit(1734055215.006:577): pid=5292 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 02:00:15.015853 kernel: audit: type=1104 audit(1734055215.009:578): pid=5292 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 02:00:15.010000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.48:22-10.0.0.1:48342 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:19.767301 kubelet[2213]: E1213 02:00:19.767241 2213 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 02:00:20.009605 systemd[1]: Started sshd@25-10.0.0.48:22-10.0.0.1:37108.service. Dec 13 02:00:20.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.48:22-10.0.0.1:37108 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:20.010800 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 02:00:20.010900 kernel: audit: type=1130 audit(1734055220.009:580): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.48:22-10.0.0.1:37108 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:00:20.040000 audit[5325]: USER_ACCT pid=5325 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 02:00:20.041379 sshd[5325]: Accepted publickey for core from 10.0.0.1 port 37108 ssh2: RSA SHA256:x3bGe46DV3PhhP3e9zafVi+waO6W4gVuKhz8/ATtw3M Dec 13 02:00:20.043620 sshd[5325]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:00:20.042000 audit[5325]: CRED_ACQ pid=5325 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 02:00:20.047394 systemd-logind[1291]: New session 26 of user core. Dec 13 02:00:20.048341 systemd[1]: Started session-26.scope. 
Dec 13 02:00:20.049450 kernel: audit: type=1101 audit(1734055220.040:581): pid=5325 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 02:00:20.049530 kernel: audit: type=1103 audit(1734055220.042:582): pid=5325 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 02:00:20.052040 kernel: audit: type=1006 audit(1734055220.042:583): pid=5325 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Dec 13 02:00:20.052271 kernel: audit: type=1300 audit(1734055220.042:583): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdc651efa0 a2=3 a3=0 items=0 ppid=1 pid=5325 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:00:20.042000 audit[5325]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdc651efa0 a2=3 a3=0 items=0 ppid=1 pid=5325 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:00:20.057870 kernel: audit: type=1327 audit(1734055220.042:583): proctitle=737368643A20636F7265205B707269765D Dec 13 02:00:20.042000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 02:00:20.054000 audit[5325]: USER_START pid=5325 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 02:00:20.063164 kernel: audit: type=1105 audit(1734055220.054:584): pid=5325 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 02:00:20.063375 kernel: audit: type=1103 audit(1734055220.055:585): pid=5328 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 02:00:20.055000 audit[5328]: CRED_ACQ pid=5328 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 02:00:20.168575 sshd[5325]: pam_unix(sshd:session): session closed for user core Dec 13 02:00:20.168000 audit[5325]: USER_END pid=5325 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 02:00:20.171073 systemd[1]: sshd@25-10.0.0.48:22-10.0.0.1:37108.service: Deactivated successfully. Dec 13 02:00:20.172053 systemd[1]: session-26.scope: Deactivated successfully. 
Dec 13 02:00:20.172407 systemd-logind[1291]: Session 26 logged out. Waiting for processes to exit. Dec 13 02:00:20.173226 systemd-logind[1291]: Removed session 26. Dec 13 02:00:20.168000 audit[5325]: CRED_DISP pid=5325 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 02:00:20.179585 kernel: audit: type=1106 audit(1734055220.168:586): pid=5325 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 02:00:20.179640 kernel: audit: type=1104 audit(1734055220.168:587): pid=5325 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 02:00:20.170000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.48:22-10.0.0.1:37108 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'