Dec 13 07:36:27.982347 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Dec 12 23:50:37 -00 2024 Dec 13 07:36:27.982390 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 07:36:27.982409 kernel: BIOS-provided physical RAM map: Dec 13 07:36:27.982419 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Dec 13 07:36:27.982427 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Dec 13 07:36:27.982436 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Dec 13 07:36:27.982454 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable Dec 13 07:36:27.982464 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved Dec 13 07:36:27.982474 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Dec 13 07:36:27.982483 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Dec 13 07:36:27.982497 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Dec 13 07:36:27.982506 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Dec 13 07:36:27.982516 kernel: NX (Execute Disable) protection: active Dec 13 07:36:27.982525 kernel: SMBIOS 2.8 present. Dec 13 07:36:27.982537 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014 Dec 13 07:36:27.982547 kernel: Hypervisor detected: KVM Dec 13 07:36:27.982561 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 13 07:36:27.982571 kernel: kvm-clock: cpu 0, msr 4419b001, primary cpu clock Dec 13 07:36:27.982581 kernel: kvm-clock: using sched offset of 4805475508 cycles Dec 13 07:36:27.982592 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 13 07:36:27.982602 kernel: tsc: Detected 2799.998 MHz processor Dec 13 07:36:27.982613 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 13 07:36:27.982623 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 13 07:36:27.982633 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Dec 13 07:36:27.982643 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 13 07:36:27.982658 kernel: Using GB pages for direct mapping Dec 13 07:36:27.982668 kernel: ACPI: Early table checksum verification disabled Dec 13 07:36:27.982678 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS ) Dec 13 07:36:27.982688 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 07:36:27.982698 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 07:36:27.982708 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 07:36:27.982718 kernel: ACPI: FACS 0x000000007FFDFD40 000040 Dec 13 07:36:27.982728 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 07:36:27.982738 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 07:36:27.982752 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 
00000001) Dec 13 07:36:27.982762 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 13 07:36:27.982772 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480] Dec 13 07:36:27.982783 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c] Dec 13 07:36:27.982792 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f] Dec 13 07:36:27.982803 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570] Dec 13 07:36:27.982819 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740] Dec 13 07:36:27.982834 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c] Dec 13 07:36:27.982845 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4] Dec 13 07:36:27.982856 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Dec 13 07:36:27.982866 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Dec 13 07:36:27.982877 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Dec 13 07:36:27.982888 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0 Dec 13 07:36:27.982898 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Dec 13 07:36:27.982913 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0 Dec 13 07:36:27.982924 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Dec 13 07:36:27.982934 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0 Dec 13 07:36:27.982945 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Dec 13 07:36:27.982955 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0 Dec 13 07:36:27.982966 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Dec 13 07:36:27.982977 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0 Dec 13 07:36:27.982987 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Dec 13 07:36:27.982998 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0 Dec 13 07:36:27.983008 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Dec 13 07:36:27.983023 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0 Dec 13 07:36:27.983033 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Dec 13 07:36:27.983044 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Dec 13 07:36:27.983055 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug Dec 13 07:36:27.983066 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff] Dec 13 07:36:27.983076 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff] Dec 13 07:36:27.983087 kernel: Zone ranges: Dec 13 07:36:27.983098 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 07:36:27.983119 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff] Dec 13 07:36:27.983137 kernel: Normal empty Dec 13 07:36:27.983148 kernel: Movable zone start for each node Dec 13 07:36:27.983159 kernel: Early memory node ranges Dec 13 07:36:27.983169 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Dec 13 07:36:27.983180 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff] Dec 13 07:36:27.983207 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff] Dec 13 07:36:27.983220 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 07:36:27.983231 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Dec 13 07:36:27.983242 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges Dec 13 07:36:27.983257 kernel: ACPI: PM-Timer IO Port: 0x608 Dec 13 07:36:27.983268 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 13 07:36:27.983279 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 13 07:36:27.983290 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Dec 13 07:36:27.983301 
kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 13 07:36:27.983311 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 07:36:27.983322 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 13 07:36:27.983333 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 13 07:36:27.983344 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 07:36:27.983358 kernel: TSC deadline timer available Dec 13 07:36:27.983369 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs Dec 13 07:36:27.983380 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Dec 13 07:36:27.983391 kernel: Booting paravirtualized kernel on KVM Dec 13 07:36:27.983401 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 07:36:27.983412 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1 Dec 13 07:36:27.983423 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144 Dec 13 07:36:27.983434 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152 Dec 13 07:36:27.983445 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Dec 13 07:36:27.983459 kernel: kvm-guest: stealtime: cpu 0, msr 7da1c0c0 Dec 13 07:36:27.983470 kernel: kvm-guest: PV spinlocks enabled Dec 13 07:36:27.983481 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 13 07:36:27.983491 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804 Dec 13 07:36:27.983502 kernel: Policy zone: DMA32 Dec 13 07:36:27.983514 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 07:36:27.983525 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 07:36:27.983536 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 07:36:27.983551 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 13 07:36:27.983562 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 07:36:27.983573 kernel: Memory: 1903832K/2096616K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47476K init, 4108K bss, 192524K reserved, 0K cma-reserved) Dec 13 07:36:27.983584 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Dec 13 07:36:27.983595 kernel: Kernel/User page tables isolation: enabled Dec 13 07:36:27.983606 kernel: ftrace: allocating 34549 entries in 135 pages Dec 13 07:36:27.983616 kernel: ftrace: allocated 135 pages with 4 groups Dec 13 07:36:27.983627 kernel: rcu: Hierarchical RCU implementation. Dec 13 07:36:27.983639 kernel: rcu: RCU event tracing is enabled. Dec 13 07:36:27.983654 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Dec 13 07:36:27.983665 kernel: Rude variant of Tasks RCU enabled. Dec 13 07:36:27.983676 kernel: Tracing variant of Tasks RCU enabled. Dec 13 07:36:27.983687 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Dec 13 07:36:27.983698 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Dec 13 07:36:27.983708 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16 Dec 13 07:36:27.983719 kernel: random: crng init done Dec 13 07:36:27.983744 kernel: Console: colour VGA+ 80x25 Dec 13 07:36:27.983755 kernel: printk: console [tty0] enabled Dec 13 07:36:27.983766 kernel: printk: console [ttyS0] enabled Dec 13 07:36:27.983777 kernel: ACPI: Core revision 20210730 Dec 13 07:36:27.983789 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 07:36:27.983804 kernel: x2apic enabled Dec 13 07:36:27.983815 kernel: Switched APIC routing to physical x2apic. Dec 13 07:36:27.983827 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Dec 13 07:36:27.983838 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998) Dec 13 07:36:27.983850 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Dec 13 07:36:27.983865 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Dec 13 07:36:27.983877 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Dec 13 07:36:27.983888 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 07:36:27.983899 kernel: Spectre V2 : Mitigation: Retpolines Dec 13 07:36:27.983910 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 07:36:27.983922 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 07:36:27.983933 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Dec 13 07:36:27.983944 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 13 07:36:27.983955 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Dec 13 07:36:27.983966 kernel: MDS: Mitigation: Clear CPU buffers Dec 13 07:36:27.983977 kernel: MMIO Stale Data: Unknown: No mitigations Dec 13 07:36:27.983993 kernel: SRBDS: Unknown: Dependent on hypervisor status Dec 13 07:36:27.984004 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 07:36:27.984016 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 07:36:27.984027 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 07:36:27.984038 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 07:36:27.984049 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Dec 13 07:36:27.984060 kernel: Freeing SMP alternatives memory: 32K Dec 13 07:36:27.984071 kernel: pid_max: default: 32768 minimum: 301 Dec 13 07:36:27.984082 kernel: LSM: Security Framework initializing Dec 13 07:36:27.984093 kernel: SELinux: Initializing. Dec 13 07:36:27.984105 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 13 07:36:27.984132 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 13 07:36:27.984144 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9) Dec 13 07:36:27.984155 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only. Dec 13 07:36:27.984166 kernel: signal: max sigframe size: 1776 Dec 13 07:36:27.984178 kernel: rcu: Hierarchical SRCU implementation. 
Dec 13 07:36:27.984189 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 13 07:36:27.984271 kernel: smp: Bringing up secondary CPUs ... Dec 13 07:36:27.984283 kernel: x86: Booting SMP configuration: Dec 13 07:36:27.984294 kernel: .... node #0, CPUs: #1 Dec 13 07:36:27.984311 kernel: kvm-clock: cpu 1, msr 4419b041, secondary cpu clock Dec 13 07:36:27.984323 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Dec 13 07:36:27.984334 kernel: kvm-guest: stealtime: cpu 1, msr 7da5c0c0 Dec 13 07:36:27.984346 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 07:36:27.984357 kernel: smpboot: Max logical packages: 16 Dec 13 07:36:27.984368 kernel: smpboot: Total of 2 processors activated (11199.99 BogoMIPS) Dec 13 07:36:27.984380 kernel: devtmpfs: initialized Dec 13 07:36:27.984391 kernel: x86/mm: Memory block size: 128MB Dec 13 07:36:27.984402 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 07:36:27.984414 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Dec 13 07:36:27.984429 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 07:36:27.984441 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 07:36:27.984452 kernel: audit: initializing netlink subsys (disabled) Dec 13 07:36:27.984463 kernel: audit: type=2000 audit(1734075387.111:1): state=initialized audit_enabled=0 res=1 Dec 13 07:36:27.984474 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 07:36:27.984486 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 07:36:27.984497 kernel: cpuidle: using governor menu Dec 13 07:36:27.984508 kernel: ACPI: bus type PCI registered Dec 13 07:36:27.984519 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 07:36:27.984535 kernel: dca service started, version 1.12.1 Dec 13 07:36:27.984546 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Dec 13 07:36:27.984557 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved in E820 Dec 13 07:36:27.984569 kernel: PCI: Using configuration type 1 for base access Dec 13 07:36:27.984580 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 13 07:36:27.984591 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 07:36:27.984603 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 07:36:27.984614 kernel: ACPI: Added _OSI(Module Device) Dec 13 07:36:27.984629 kernel: ACPI: Added _OSI(Processor Device) Dec 13 07:36:27.984641 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 07:36:27.984652 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 07:36:27.984663 kernel: ACPI: Added _OSI(Linux-Dell-Video) Dec 13 07:36:27.984675 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Dec 13 07:36:27.984686 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Dec 13 07:36:27.984697 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 07:36:27.984708 kernel: ACPI: Interpreter enabled Dec 13 07:36:27.984720 kernel: ACPI: PM: (supports S0 S5) Dec 13 07:36:27.984731 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 07:36:27.984747 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 07:36:27.984759 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Dec 13 07:36:27.984770 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 13 07:36:27.985043 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 07:36:27.985247 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Dec 13 07:36:27.985400 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Dec 13 07:36:27.985418 kernel: PCI host bridge to bus 0000:00 Dec 13 07:36:27.985589 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 13 07:36:27.985724 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 13 07:36:27.985857 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 13 07:36:27.986004 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Dec 13 07:36:27.986159 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Dec 13 07:36:27.986311 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] Dec 13 07:36:27.986446 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 13 07:36:27.986645 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Dec 13 07:36:27.986819 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 Dec 13 07:36:27.986980 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref] Dec 13 07:36:27.987155 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff] Dec 13 07:36:27.987328 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref] Dec 13 07:36:27.987478 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 13 07:36:27.987660 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Dec 13 07:36:27.987827 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff] Dec 13 07:36:27.988007 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Dec 13 07:36:27.988184 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff] Dec 13 07:36:27.988359 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Dec 13 07:36:27.988507 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff] Dec 13 07:36:27.988674 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Dec 13 07:36:27.988834 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff] Dec 13 07:36:27.989002 kernel: pci 0000:00:02.4: 
[1b36:000c] type 01 class 0x060400 Dec 13 07:36:27.989172 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff] Dec 13 07:36:27.989354 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Dec 13 07:36:27.989516 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff] Dec 13 07:36:27.989695 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Dec 13 07:36:27.989848 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff] Dec 13 07:36:27.990007 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Dec 13 07:36:27.990178 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff] Dec 13 07:36:27.990410 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Dec 13 07:36:27.990558 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df] Dec 13 07:36:27.990703 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff] Dec 13 07:36:27.990855 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] Dec 13 07:36:27.991000 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref] Dec 13 07:36:27.991171 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Dec 13 07:36:27.991333 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Dec 13 07:36:27.991479 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff] Dec 13 07:36:27.991624 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref] Dec 13 07:36:27.991780 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Dec 13 07:36:27.991933 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Dec 13 07:36:27.992088 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Dec 13 07:36:27.992275 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff] Dec 13 07:36:27.992422 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff] Dec 13 07:36:27.992575 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Dec 13 07:36:27.992721 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Dec 13 07:36:27.992889 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 Dec 13 07:36:27.993042 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit] Dec 13 07:36:27.993218 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Dec 13 07:36:27.993367 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Dec 13 07:36:27.993513 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Dec 13 07:36:27.993697 kernel: pci_bus 0000:02: extended config space not accessible Dec 13 07:36:27.993881 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 Dec 13 07:36:27.994041 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f] Dec 13 07:36:27.994262 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Dec 13 07:36:27.994418 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Dec 13 07:36:27.994578 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 Dec 13 07:36:27.994730 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit] Dec 13 07:36:27.994878 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Dec 13 07:36:27.995031 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Dec 13 07:36:27.995201 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Dec 13 07:36:27.995370 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 Dec 13 07:36:27.995525 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref] Dec 13 07:36:27.995676 kernel: pci 
0000:00:02.2: PCI bridge to [bus 04] Dec 13 07:36:27.995819 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Dec 13 07:36:27.995964 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Dec 13 07:36:27.996120 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Dec 13 07:36:27.996306 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Dec 13 07:36:27.996453 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Dec 13 07:36:27.996601 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Dec 13 07:36:27.996745 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Dec 13 07:36:27.996890 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Dec 13 07:36:27.997036 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Dec 13 07:36:27.997206 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Dec 13 07:36:27.997355 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Dec 13 07:36:27.997510 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Dec 13 07:36:27.997656 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Dec 13 07:36:27.997801 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Dec 13 07:36:27.997961 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Dec 13 07:36:27.998122 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Dec 13 07:36:27.998286 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Dec 13 07:36:27.998304 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 13 07:36:27.998317 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 13 07:36:27.998336 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 13 07:36:27.998348 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 13 07:36:27.998360 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Dec 13 07:36:27.998372 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Dec 13 07:36:27.998384 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Dec 13 07:36:27.998395 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Dec 13 07:36:27.998407 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Dec 13 07:36:27.998419 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Dec 13 07:36:27.998430 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Dec 13 07:36:27.998454 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Dec 13 07:36:27.998466 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Dec 13 07:36:27.998478 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Dec 13 07:36:27.998489 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Dec 13 07:36:27.998501 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Dec 13 07:36:27.998512 kernel: iommu: Default domain type: Translated Dec 13 07:36:27.998524 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 07:36:27.998687 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Dec 13 07:36:27.998843 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 13 07:36:27.998995 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Dec 13 07:36:27.999022 kernel: vgaarb: loaded Dec 13 07:36:27.999035 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 13 07:36:27.999047 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 13 07:36:27.999058 kernel: PTP clock support registered Dec 13 07:36:27.999070 kernel: PCI: Using ACPI for IRQ routing Dec 13 07:36:27.999090 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 13 07:36:27.999101 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Dec 13 07:36:27.999131 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] Dec 13 07:36:27.999152 kernel: clocksource: Switched to clocksource kvm-clock Dec 13 07:36:27.999164 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 07:36:27.999176 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 07:36:27.999188 kernel: pnp: PnP ACPI init Dec 13 07:36:27.999386 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Dec 13 07:36:27.999407 kernel: pnp: PnP ACPI: found 5 devices Dec 13 07:36:27.999419 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 07:36:27.999437 kernel: NET: Registered PF_INET protocol family Dec 13 07:36:27.999449 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 07:36:27.999461 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Dec 13 07:36:27.999473 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 07:36:27.999485 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 07:36:27.999497 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Dec 13 07:36:27.999509 kernel: TCP: Hash tables configured (established 16384 bind 16384) Dec 13 07:36:27.999520 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 13 07:36:27.999532 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 13 07:36:27.999550 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 07:36:27.999571 kernel: NET: Registered PF_XDP protocol family Dec 13 07:36:27.999736 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000 Dec 13 07:36:27.999918 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Dec 13 07:36:28.000084 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Dec 13 07:36:28.000348 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Dec 13 07:36:28.000499 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Dec 13 07:36:28.000651 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Dec 13 07:36:28.000796 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Dec 13 07:36:28.000941 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Dec 13 07:36:28.001086 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Dec 13 07:36:28.001258 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Dec 13 07:36:28.001405 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Dec 13 07:36:28.001557 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Dec 13 07:36:28.001715 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Dec 13 07:36:28.001877 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Dec 13 07:36:28.002026 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Dec 13 07:36:28.002212 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Dec 13 
07:36:28.002373 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Dec 13 07:36:28.002524 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Dec 13 07:36:28.002669 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Dec 13 07:36:28.002814 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Dec 13 07:36:28.002966 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Dec 13 07:36:28.003132 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Dec 13 07:36:28.003316 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Dec 13 07:36:28.003465 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Dec 13 07:36:28.003612 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Dec 13 07:36:28.003756 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Dec 13 07:36:28.003903 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Dec 13 07:36:28.004050 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Dec 13 07:36:28.004230 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Dec 13 07:36:28.004379 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Dec 13 07:36:28.004526 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Dec 13 07:36:28.004673 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Dec 13 07:36:28.004820 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Dec 13 07:36:28.004987 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Dec 13 07:36:28.005147 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Dec 13 07:36:28.005317 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Dec 13 07:36:28.005464 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Dec 13 07:36:28.005610 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Dec 13 07:36:28.005760 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Dec 13 07:36:28.005915 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Dec 13 07:36:28.006081 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Dec 13 07:36:28.006268 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Dec 13 07:36:28.006431 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Dec 13 07:36:28.006610 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Dec 13 07:36:28.006771 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Dec 13 07:36:28.006957 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Dec 13 07:36:28.007139 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Dec 13 07:36:28.007375 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Dec 13 07:36:28.007529 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Dec 13 07:36:28.007675 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Dec 13 07:36:28.007818 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 13 07:36:28.007952 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 13 07:36:28.008084 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 13 07:36:28.008245 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Dec 13 07:36:28.008381 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Dec 13 07:36:28.008514 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window] Dec 13 07:36:28.008678 kernel: pci_bus 0000:01: resource 0 [io 
0x1000-0x1fff] Dec 13 07:36:28.008819 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff] Dec 13 07:36:28.008969 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Dec 13 07:36:28.009136 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Dec 13 07:36:28.009333 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff] Dec 13 07:36:28.009475 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff] Dec 13 07:36:28.009615 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Dec 13 07:36:28.009775 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff] Dec 13 07:36:28.009917 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Dec 13 07:36:28.010056 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Dec 13 07:36:28.010233 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Dec 13 07:36:28.010377 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Dec 13 07:36:28.010517 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Dec 13 07:36:28.010667 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] Dec 13 07:36:28.010815 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Dec 13 07:36:28.010956 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Dec 13 07:36:28.011107 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff] Dec 13 07:36:28.011292 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Dec 13 07:36:28.011435 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Dec 13 07:36:28.011584 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Dec 13 07:36:28.011733 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Dec 13 07:36:28.011876 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Dec 13 07:36:28.012043 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Dec 13 07:36:28.012245 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Dec 13 07:36:28.012391 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Dec 13 07:36:28.012411 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Dec 13 07:36:28.012424 kernel: PCI: CLS 0 bytes, default 64 Dec 13 07:36:28.012436 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 13 07:36:28.012456 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Dec 13 07:36:28.012469 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 13 07:36:28.012482 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Dec 13 07:36:28.012495 kernel: Initialise system trusted keyrings Dec 13 07:36:28.012507 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 13 07:36:28.012519 kernel: Key type asymmetric registered Dec 13 07:36:28.012531 kernel: Asymmetric key parser 'x509' registered Dec 13 07:36:28.012543 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Dec 13 07:36:28.012555 kernel: io scheduler mq-deadline registered Dec 13 07:36:28.012572 kernel: io scheduler kyber registered Dec 13 07:36:28.012584 kernel: io scheduler bfq registered Dec 13 07:36:28.012729 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Dec 13 07:36:28.012877 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Dec 13 07:36:28.013024 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ 
Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 07:36:28.013184 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Dec 13 07:36:28.013380 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Dec 13 07:36:28.013534 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 07:36:28.013682 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Dec 13 07:36:28.013828 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Dec 13 07:36:28.013973 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 07:36:28.014132 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Dec 13 07:36:28.014303 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Dec 13 07:36:28.014461 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 07:36:28.014618 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Dec 13 07:36:28.014763 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Dec 13 07:36:28.014907 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 07:36:28.015062 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Dec 13 07:36:28.015240 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Dec 13 07:36:28.015395 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 07:36:28.015550 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Dec 13 07:36:28.015712 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Dec 13 07:36:28.015858 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 07:36:28.016005 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Dec 13 07:36:28.016180 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Dec 13 07:36:28.016357 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 07:36:28.016377 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 07:36:28.016391 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 13 07:36:28.016404 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Dec 13 07:36:28.016416 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 07:36:28.016429 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 07:36:28.016441 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 07:36:28.016454 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 07:36:28.016472 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 07:36:28.016647 kernel: rtc_cmos 00:03: RTC can wake from S4 Dec 13 07:36:28.016668 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 07:36:28.016803 kernel: rtc_cmos 00:03: registered as rtc0 Dec 13 07:36:28.016941 kernel: rtc_cmos 00:03: setting system clock to 2024-12-13T07:36:27 UTC (1734075387) Dec 13 07:36:28.017079 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Dec 13 07:36:28.017097 kernel: intel_pstate: CPU model not supported Dec 13 07:36:28.017127 kernel: 
NET: Registered PF_INET6 protocol family Dec 13 07:36:28.017140 kernel: Segment Routing with IPv6 Dec 13 07:36:28.017152 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 07:36:28.017165 kernel: NET: Registered PF_PACKET protocol family Dec 13 07:36:28.017177 kernel: Key type dns_resolver registered Dec 13 07:36:28.017203 kernel: IPI shorthand broadcast: enabled Dec 13 07:36:28.017218 kernel: sched_clock: Marking stable (949278919, 208920903)->(1426043438, -267843616) Dec 13 07:36:28.017231 kernel: registered taskstats version 1 Dec 13 07:36:28.017243 kernel: Loading compiled-in X.509 certificates Dec 13 07:36:28.017255 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: d9defb0205602bee9bb670636cbe5c74194fdb5e' Dec 13 07:36:28.017274 kernel: Key type .fscrypt registered Dec 13 07:36:28.017286 kernel: Key type fscrypt-provisioning registered Dec 13 07:36:28.017298 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 13 07:36:28.017310 kernel: ima: Allocated hash algorithm: sha1 Dec 13 07:36:28.017323 kernel: ima: No architecture policies found Dec 13 07:36:28.017335 kernel: clk: Disabling unused clocks Dec 13 07:36:28.017347 kernel: Freeing unused kernel image (initmem) memory: 47476K Dec 13 07:36:28.017359 kernel: Write protecting the kernel read-only data: 28672k Dec 13 07:36:28.017375 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Dec 13 07:36:28.017388 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K Dec 13 07:36:28.017400 kernel: Run /init as init process Dec 13 07:36:28.017412 kernel: with arguments: Dec 13 07:36:28.017425 kernel: /init Dec 13 07:36:28.017437 kernel: with environment: Dec 13 07:36:28.017449 kernel: HOME=/ Dec 13 07:36:28.017460 kernel: TERM=linux Dec 13 07:36:28.017472 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 07:36:28.017505 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 07:36:28.017528 systemd[1]: Detected virtualization kvm. Dec 13 07:36:28.017541 systemd[1]: Detected architecture x86-64. Dec 13 07:36:28.017553 systemd[1]: Running in initrd. Dec 13 07:36:28.017576 systemd[1]: No hostname configured, using default hostname. Dec 13 07:36:28.017589 systemd[1]: Hostname set to . Dec 13 07:36:28.017602 systemd[1]: Initializing machine ID from VM UUID. Dec 13 07:36:28.017619 systemd[1]: Queued start job for default target initrd.target. Dec 13 07:36:28.017638 systemd[1]: Started systemd-ask-password-console.path. Dec 13 07:36:28.017651 systemd[1]: Reached target cryptsetup.target. Dec 13 07:36:28.017664 systemd[1]: Reached target paths.target. Dec 13 07:36:28.017677 systemd[1]: Reached target slices.target. Dec 13 07:36:28.017704 systemd[1]: Reached target swap.target. Dec 13 07:36:28.017716 systemd[1]: Reached target timers.target. Dec 13 07:36:28.017730 systemd[1]: Listening on iscsid.socket. Dec 13 07:36:28.017748 systemd[1]: Listening on iscsiuio.socket. Dec 13 07:36:28.017761 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 07:36:28.017778 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 07:36:28.017792 systemd[1]: Listening on systemd-journald.socket. Dec 13 07:36:28.017805 systemd[1]: Listening on systemd-networkd.socket. 
Dec 13 07:36:28.017818 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 07:36:28.017830 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 07:36:28.017844 systemd[1]: Reached target sockets.target. Dec 13 07:36:28.017856 systemd[1]: Starting kmod-static-nodes.service... Dec 13 07:36:28.017874 systemd[1]: Finished network-cleanup.service. Dec 13 07:36:28.017887 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 07:36:28.017900 systemd[1]: Starting systemd-journald.service... Dec 13 07:36:28.017913 systemd[1]: Starting systemd-modules-load.service... Dec 13 07:36:28.017926 systemd[1]: Starting systemd-resolved.service... Dec 13 07:36:28.017939 systemd[1]: Starting systemd-vconsole-setup.service... Dec 13 07:36:28.017952 systemd[1]: Finished kmod-static-nodes.service. Dec 13 07:36:28.017965 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 07:36:28.017990 systemd-journald[202]: Journal started Dec 13 07:36:28.018075 systemd-journald[202]: Runtime Journal (/run/log/journal/f37f56e85fea4575aeccae7d3c049781) is 4.7M, max 38.1M, 33.3M free. Dec 13 07:36:27.974094 systemd-resolved[204]: Positive Trust Anchors: Dec 13 07:36:28.071598 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 07:36:28.071639 kernel: Bridge firewalling registered Dec 13 07:36:28.071657 systemd[1]: Started systemd-resolved.service. Dec 13 07:36:28.071679 kernel: audit: type=1130 audit(1734075388.055:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:28.071697 kernel: SCSI subsystem initialized Dec 13 07:36:28.071712 systemd[1]: Started systemd-journald.service. Dec 13 07:36:28.071729 kernel: audit: type=1130 audit(1734075388.064:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:28.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:28.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:27.974124 systemd-resolved[204]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 07:36:27.974169 systemd-resolved[204]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 07:36:27.974267 systemd-modules-load[203]: Inserted module 'overlay' Dec 13 07:36:27.978039 systemd-resolved[204]: Defaulting to hostname 'linux'. Dec 13 07:36:28.037765 systemd-modules-load[203]: Inserted module 'br_netfilter' Dec 13 07:36:28.087743 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Dec 13 07:36:28.087778 kernel: audit: type=1130 audit(1734075388.078:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:28.087796 kernel: device-mapper: uevent: version 1.0.3 Dec 13 07:36:28.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:28.079460 systemd[1]: Finished systemd-vconsole-setup.service. Dec 13 07:36:28.096801 kernel: audit: type=1130 audit(1734075388.079:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:28.096833 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Dec 13 07:36:28.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:28.080296 systemd[1]: Reached target nss-lookup.target. Dec 13 07:36:28.098345 systemd[1]: Starting dracut-cmdline-ask.service... Dec 13 07:36:28.098674 systemd-modules-load[203]: Inserted module 'dm_multipath' Dec 13 07:36:28.100584 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 07:36:28.108034 systemd[1]: Finished systemd-modules-load.service. Dec 13 07:36:28.126385 kernel: audit: type=1130 audit(1734075388.110:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:28.126430 kernel: audit: type=1130 audit(1734075388.116:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:28.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:28.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:28.111452 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 07:36:28.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:28.134279 kernel: audit: type=1130 audit(1734075388.126:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:28.118018 systemd[1]: Starting systemd-sysctl.service... Dec 13 07:36:28.118930 systemd[1]: Finished dracut-cmdline-ask.service. Dec 13 07:36:28.134985 systemd[1]: Starting dracut-cmdline.service... Dec 13 07:36:28.138910 systemd[1]: Finished systemd-sysctl.service. Dec 13 07:36:28.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 07:36:28.158324 kernel: audit: type=1130 audit(1734075388.139:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:28.166103 dracut-cmdline[223]: dracut-dracut-053 Dec 13 07:36:28.169083 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 07:36:28.250242 kernel: Loading iSCSI transport class v2.0-870. Dec 13 07:36:28.271241 kernel: iscsi: registered transport (tcp) Dec 13 07:36:28.299628 kernel: iscsi: registered transport (qla4xxx) Dec 13 07:36:28.299745 kernel: QLogic iSCSI HBA Driver Dec 13 07:36:28.346396 systemd[1]: Finished dracut-cmdline.service. Dec 13 07:36:28.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:28.354233 kernel: audit: type=1130 audit(1734075388.346:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:28.352990 systemd[1]: Starting dracut-pre-udev.service... Dec 13 07:36:28.411298 kernel: raid6: sse2x4 gen() 7901 MB/s Dec 13 07:36:28.429272 kernel: raid6: sse2x4 xor() 5194 MB/s Dec 13 07:36:28.447262 kernel: raid6: sse2x2 gen() 5800 MB/s Dec 13 07:36:28.465253 kernel: raid6: sse2x2 xor() 8340 MB/s Dec 13 07:36:28.483254 kernel: raid6: sse2x1 gen() 5623 MB/s Dec 13 07:36:28.501849 kernel: raid6: sse2x1 xor() 7605 MB/s Dec 13 07:36:28.501986 kernel: raid6: using algorithm sse2x4 gen() 7901 MB/s Dec 13 07:36:28.502004 kernel: raid6: .... xor() 5194 MB/s, rmw enabled Dec 13 07:36:28.503074 kernel: raid6: using ssse3x2 recovery algorithm Dec 13 07:36:28.520253 kernel: xor: automatically using best checksumming function avx Dec 13 07:36:28.634270 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 07:36:28.647353 systemd[1]: Finished dracut-pre-udev.service. Dec 13 07:36:28.647000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:28.648000 audit: BPF prog-id=7 op=LOAD Dec 13 07:36:28.648000 audit: BPF prog-id=8 op=LOAD Dec 13 07:36:28.649419 systemd[1]: Starting systemd-udevd.service... Dec 13 07:36:28.667806 systemd-udevd[403]: Using default interface naming scheme 'v252'. Dec 13 07:36:28.676755 systemd[1]: Started systemd-udevd.service. Dec 13 07:36:28.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:28.679518 systemd[1]: Starting dracut-pre-trigger.service... Dec 13 07:36:28.696186 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation Dec 13 07:36:28.738765 systemd[1]: Finished dracut-pre-trigger.service. 
Dec 13 07:36:28.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:28.740819 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 07:36:28.829961 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 07:36:28.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:28.928754 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Dec 13 07:36:28.966902 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 07:36:28.966932 kernel: ACPI: bus type USB registered Dec 13 07:36:28.966949 kernel: usbcore: registered new interface driver usbfs Dec 13 07:36:28.966976 kernel: usbcore: registered new interface driver hub Dec 13 07:36:28.966992 kernel: AVX version of gcm_enc/dec engaged. Dec 13 07:36:28.967007 kernel: usbcore: registered new device driver usb Dec 13 07:36:28.967023 kernel: AES CTR mode by8 optimization enabled Dec 13 07:36:28.967038 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 07:36:28.967054 kernel: GPT:17805311 != 125829119 Dec 13 07:36:28.967069 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 07:36:28.967084 kernel: GPT:17805311 != 125829119 Dec 13 07:36:28.967112 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 07:36:28.967134 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 07:36:29.002763 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 07:36:29.125860 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Dec 13 07:36:29.126245 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Dec 13 07:36:29.126423 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Dec 13 07:36:29.126593 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Dec 13 07:36:29.126776 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (453) Dec 13 07:36:29.126796 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Dec 13 07:36:29.126962 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Dec 13 07:36:29.127143 kernel: hub 1-0:1.0: USB hub found Dec 13 07:36:29.127375 kernel: hub 1-0:1.0: 4 ports detected Dec 13 07:36:29.127571 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Dec 13 07:36:29.127852 kernel: hub 2-0:1.0: USB hub found Dec 13 07:36:29.128067 kernel: hub 2-0:1.0: 4 ports detected Dec 13 07:36:29.128282 kernel: libata version 3.00 loaded. 
Dec 13 07:36:29.128302 kernel: ahci 0000:00:1f.2: version 3.0 Dec 13 07:36:29.128472 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 13 07:36:29.128493 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Dec 13 07:36:29.128657 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 13 07:36:29.128828 kernel: scsi host0: ahci Dec 13 07:36:29.129052 kernel: scsi host1: ahci Dec 13 07:36:29.129266 kernel: scsi host2: ahci Dec 13 07:36:29.129454 kernel: scsi host3: ahci Dec 13 07:36:29.129645 kernel: scsi host4: ahci Dec 13 07:36:29.129823 kernel: scsi host5: ahci Dec 13 07:36:29.130012 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41 Dec 13 07:36:29.130038 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41 Dec 13 07:36:29.130055 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41 Dec 13 07:36:29.130071 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41 Dec 13 07:36:29.130098 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41 Dec 13 07:36:29.130117 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41 Dec 13 07:36:29.029046 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 07:36:29.126461 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 07:36:29.131540 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 07:36:29.144476 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 07:36:29.147390 systemd[1]: Starting disk-uuid.service... Dec 13 07:36:29.158221 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 07:36:29.161387 disk-uuid[529]: Primary Header is updated. Dec 13 07:36:29.161387 disk-uuid[529]: Secondary Entries is updated. Dec 13 07:36:29.161387 disk-uuid[529]: Secondary Header is updated. Dec 13 07:36:29.258242 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Dec 13 07:36:29.390253 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 13 07:36:29.393314 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 13 07:36:29.393366 kernel: ata3: SATA link down (SStatus 0 SControl 300) Dec 13 07:36:29.398042 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 13 07:36:29.398112 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 13 07:36:29.398132 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 13 07:36:29.412224 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 07:36:29.419378 kernel: usbcore: registered new interface driver usbhid Dec 13 07:36:29.419441 kernel: usbhid: USB HID core driver Dec 13 07:36:29.426224 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3 Dec 13 07:36:29.426333 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Dec 13 07:36:30.171498 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 07:36:30.171584 disk-uuid[530]: The operation has completed successfully. Dec 13 07:36:30.233015 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 07:36:30.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 07:36:30.233000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:30.233236 systemd[1]: Finished disk-uuid.service. Dec 13 07:36:30.235361 systemd[1]: Starting verity-setup.service... Dec 13 07:36:30.258513 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Dec 13 07:36:30.310959 systemd[1]: Found device dev-mapper-usr.device. Dec 13 07:36:30.314334 systemd[1]: Mounting sysusr-usr.mount... Dec 13 07:36:30.317846 systemd[1]: Finished verity-setup.service. Dec 13 07:36:30.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:30.406230 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 07:36:30.406916 systemd[1]: Mounted sysusr-usr.mount. Dec 13 07:36:30.407820 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 07:36:30.409142 systemd[1]: Starting ignition-setup.service... Dec 13 07:36:30.410644 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 07:36:30.427566 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 07:36:30.427660 kernel: BTRFS info (device vda6): using free space tree Dec 13 07:36:30.427679 kernel: BTRFS info (device vda6): has skinny extents Dec 13 07:36:30.445443 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 07:36:30.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:30.454295 systemd[1]: Finished ignition-setup.service. Dec 13 07:36:30.457095 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 07:36:30.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:30.540000 audit: BPF prog-id=9 op=LOAD Dec 13 07:36:30.538945 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 07:36:30.542074 systemd[1]: Starting systemd-networkd.service... Dec 13 07:36:30.581810 systemd-networkd[710]: lo: Link UP Dec 13 07:36:30.582871 systemd-networkd[710]: lo: Gained carrier Dec 13 07:36:30.585009 systemd-networkd[710]: Enumeration completed Dec 13 07:36:30.585888 systemd[1]: Started systemd-networkd.service. Dec 13 07:36:30.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:30.586691 systemd[1]: Reached target network.target. Dec 13 07:36:30.587304 systemd-networkd[710]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 07:36:30.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:30.589035 systemd[1]: Starting iscsiuio.service... Dec 13 07:36:30.592119 systemd-networkd[710]: eth0: Link UP Dec 13 07:36:30.592125 systemd-networkd[710]: eth0: Gained carrier Dec 13 07:36:30.608234 systemd[1]: Started iscsiuio.service. 
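verity-setup above maps the /usr partition as a read-only dm-verity device whose root hash is the verity.usrhash= value on the kernel command line; the EXT4 "norecovery" mount of dm-0 is that verified volume, mounted by sysusr-usr.mount. On a running system the mapping can be inspected with the device-mapper tools (a sketch; veritysetup assumes the cryptsetup userspace is present in the image):

    # 'V' in the dm-verity status output means all blocks read so far verified against the hash tree
    sudo dmsetup status usr
    sudo veritysetup status usr     # prints the data/hash devices and the root hash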
Dec 13 07:36:30.610329 systemd[1]: Starting iscsid.service... Dec 13 07:36:30.617027 iscsid[715]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 07:36:30.617027 iscsid[715]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Dec 13 07:36:30.617027 iscsid[715]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 07:36:30.617027 iscsid[715]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 07:36:30.617027 iscsid[715]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 07:36:30.617027 iscsid[715]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 07:36:30.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:30.617388 systemd[1]: Started iscsid.service. Dec 13 07:36:30.619453 systemd[1]: Starting dracut-initqueue.service... Dec 13 07:36:30.627401 systemd-networkd[710]: eth0: DHCPv4 address 10.230.78.246/30, gateway 10.230.78.245 acquired from 10.230.78.245 Dec 13 07:36:30.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:30.639442 systemd[1]: Finished dracut-initqueue.service. Dec 13 07:36:30.640336 systemd[1]: Reached target remote-fs-pre.target. Dec 13 07:36:30.640905 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 07:36:30.641500 systemd[1]: Reached target remote-fs.target. Dec 13 07:36:30.647094 systemd[1]: Starting dracut-pre-mount.service... Dec 13 07:36:30.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:30.659951 systemd[1]: Finished dracut-pre-mount.service. Dec 13 07:36:30.673817 ignition[642]: Ignition 2.14.0 Dec 13 07:36:30.675010 ignition[642]: Stage: fetch-offline Dec 13 07:36:30.675815 ignition[642]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 07:36:30.676730 ignition[642]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 07:36:30.678030 ignition[642]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 07:36:30.678297 ignition[642]: parsed url from cmdline: "" Dec 13 07:36:30.678305 ignition[642]: no config URL provided Dec 13 07:36:30.678316 ignition[642]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 07:36:30.678339 ignition[642]: no config at "/usr/lib/ignition/user.ign" Dec 13 07:36:30.680069 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 07:36:30.680000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' Dec 13 07:36:30.678361 ignition[642]: failed to fetch config: resource requires networking Dec 13 07:36:30.682353 systemd[1]: Starting ignition-fetch.service... Dec 13 07:36:30.678520 ignition[642]: Ignition finished successfully Dec 13 07:36:30.695316 ignition[730]: Ignition 2.14.0 Dec 13 07:36:30.695337 ignition[730]: Stage: fetch Dec 13 07:36:30.695521 ignition[730]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 07:36:30.695559 ignition[730]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 07:36:30.697101 ignition[730]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 07:36:30.697265 ignition[730]: parsed url from cmdline: "" Dec 13 07:36:30.697272 ignition[730]: no config URL provided Dec 13 07:36:30.697281 ignition[730]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 07:36:30.697296 ignition[730]: no config at "/usr/lib/ignition/user.ign" Dec 13 07:36:30.703809 ignition[730]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Dec 13 07:36:30.703847 ignition[730]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Dec 13 07:36:30.705872 ignition[730]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Dec 13 07:36:30.725225 ignition[730]: GET result: OK Dec 13 07:36:30.725441 ignition[730]: parsing config with SHA512: 54073cff64bcbfe030fb2a9a1948966d3f6e815f0c7a1394731184c6755504a27e07569e5a775125a6ef174eec7d16a98fb16ed38b03dad38ac71c9f057ef6d4 Dec 13 07:36:30.736600 unknown[730]: fetched base config from "system" Dec 13 07:36:30.737545 unknown[730]: fetched base config from "system" Dec 13 07:36:30.738284 unknown[730]: fetched user config from "openstack" Dec 13 07:36:30.739632 ignition[730]: fetch: fetch complete Dec 13 07:36:30.740315 ignition[730]: fetch: fetch passed Dec 13 07:36:30.741039 ignition[730]: Ignition finished successfully Dec 13 07:36:30.743579 systemd[1]: Finished ignition-fetch.service. Dec 13 07:36:30.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:30.745514 systemd[1]: Starting ignition-kargs.service... Dec 13 07:36:30.758008 ignition[736]: Ignition 2.14.0 Dec 13 07:36:30.758031 ignition[736]: Stage: kargs Dec 13 07:36:30.758223 ignition[736]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 07:36:30.758261 ignition[736]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 07:36:30.760063 ignition[736]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 07:36:30.761598 ignition[736]: kargs: kargs passed Dec 13 07:36:30.761680 ignition[736]: Ignition finished successfully Dec 13 07:36:30.763016 systemd[1]: Finished ignition-kargs.service. Dec 13 07:36:30.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:30.766811 systemd[1]: Starting ignition-disks.service... 
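The iscsid warnings a few entries back are harmless on this host (no iSCSI storage is in use) and spell out what the daemon expects: an /etc/iscsi/initiatorname.iscsi containing a single InitiatorName line in IQN form. A minimal example of that file (the IQN below is a made-up placeholder, not anything from this system):

    # /etc/iscsi/initiatorname.iscsi
    InitiatorName=iqn.2024-12.com.example.host:initiator01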
Dec 13 07:36:30.777889 ignition[742]: Ignition 2.14.0 Dec 13 07:36:30.779098 ignition[742]: Stage: disks Dec 13 07:36:30.779940 ignition[742]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 07:36:30.780879 ignition[742]: parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 07:36:30.782434 ignition[742]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 07:36:30.784194 ignition[742]: disks: disks passed Dec 13 07:36:30.784307 ignition[742]: Ignition finished successfully Dec 13 07:36:30.785448 systemd[1]: Finished ignition-disks.service. Dec 13 07:36:30.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:30.786417 systemd[1]: Reached target initrd-root-device.target. Dec 13 07:36:30.787393 systemd[1]: Reached target local-fs-pre.target. Dec 13 07:36:30.788595 systemd[1]: Reached target local-fs.target. Dec 13 07:36:30.789778 systemd[1]: Reached target sysinit.target. Dec 13 07:36:30.790917 systemd[1]: Reached target basic.target. Dec 13 07:36:30.793458 systemd[1]: Starting systemd-fsck-root.service... Dec 13 07:36:30.813276 systemd-fsck[749]: ROOT: clean, 621/1628000 files, 124058/1617920 blocks Dec 13 07:36:30.819360 systemd[1]: Finished systemd-fsck-root.service. Dec 13 07:36:30.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:30.823994 systemd[1]: Mounting sysroot.mount... Dec 13 07:36:30.840233 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 07:36:30.841002 systemd[1]: Mounted sysroot.mount. Dec 13 07:36:30.841761 systemd[1]: Reached target initrd-root-fs.target. Dec 13 07:36:30.844443 systemd[1]: Mounting sysroot-usr.mount... Dec 13 07:36:30.845618 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Dec 13 07:36:30.846579 systemd[1]: Starting flatcar-openstack-hostname.service... Dec 13 07:36:30.847376 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 07:36:30.847437 systemd[1]: Reached target ignition-diskful.target. Dec 13 07:36:30.854875 systemd[1]: Mounted sysroot-usr.mount. Dec 13 07:36:30.856748 systemd[1]: Starting initrd-setup-root.service... Dec 13 07:36:30.866229 initrd-setup-root[760]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 07:36:30.879167 initrd-setup-root[768]: cut: /sysroot/etc/group: No such file or directory Dec 13 07:36:30.887739 initrd-setup-root[776]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 07:36:30.894779 initrd-setup-root[785]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 07:36:30.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:30.962664 systemd[1]: Finished initrd-setup-root.service. Dec 13 07:36:30.964786 systemd[1]: Starting ignition-mount.service... Dec 13 07:36:30.966429 systemd[1]: Starting sysroot-boot.service... 
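The systemd-fsck summary above reads as inodes-in-use/total-inodes and blocks-in-use/total-blocks, so the ROOT filesystem is essentially empty at this point in the first boot. The same counters can be read later without running a check (a sketch; the label comes from the log):

    sudo tune2fs -l /dev/disk/by-label/ROOT | grep -E 'Inode count|Free inodes|Block count|Free blocks'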
Dec 13 07:36:30.979691 bash[803]: umount: /sysroot/usr/share/oem: not mounted. Dec 13 07:36:30.989440 coreos-metadata[755]: Dec 13 07:36:30.989 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 13 07:36:30.993150 ignition[804]: INFO : Ignition 2.14.0 Dec 13 07:36:30.993150 ignition[804]: INFO : Stage: mount Dec 13 07:36:30.994594 ignition[804]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 07:36:30.994594 ignition[804]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 07:36:30.996942 ignition[804]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 07:36:30.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:30.999560 ignition[804]: INFO : mount: mount passed Dec 13 07:36:30.999560 ignition[804]: INFO : Ignition finished successfully Dec 13 07:36:30.998375 systemd[1]: Finished ignition-mount.service. Dec 13 07:36:31.008776 coreos-metadata[755]: Dec 13 07:36:31.008 INFO Fetch successful Dec 13 07:36:31.010651 coreos-metadata[755]: Dec 13 07:36:31.009 INFO wrote hostname srv-ktue8.gb1.brightbox.com to /sysroot/etc/hostname Dec 13 07:36:31.012791 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Dec 13 07:36:31.012948 systemd[1]: Finished flatcar-openstack-hostname.service. Dec 13 07:36:31.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:31.014000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:31.016859 systemd[1]: Finished sysroot-boot.service. Dec 13 07:36:31.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:31.334886 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 07:36:31.346245 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (813) Dec 13 07:36:31.350623 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 07:36:31.350685 kernel: BTRFS info (device vda6): using free space tree Dec 13 07:36:31.350732 kernel: BTRFS info (device vda6): has skinny extents Dec 13 07:36:31.357835 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 07:36:31.359851 systemd[1]: Starting ignition-files.service... 
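Ignition's fetch stage and coreos-metadata both talk to the link-local metadata service at 169.254.169.254: Ignition first waits for a config drive labelled config-2 and then falls back to the OpenStack user-data URL, while coreos-metadata uses the EC2-compatible path for the hostname it writes out. The same endpoints can be queried from inside the instance (a sketch; the URLs are the ones shown in the log):

    curl -s http://169.254.169.254/openstack/latest/user_data       # the user/Ignition config that was fetched
    curl -s http://169.254.169.254/latest/meta-data/hostname        # the hostname coreos-metadata wrote to /etc/hostname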
Dec 13 07:36:31.381481 ignition[833]: INFO : Ignition 2.14.0 Dec 13 07:36:31.381481 ignition[833]: INFO : Stage: files Dec 13 07:36:31.383238 ignition[833]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 07:36:31.383238 ignition[833]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 07:36:31.383238 ignition[833]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 07:36:31.386578 ignition[833]: DEBUG : files: compiled without relabeling support, skipping Dec 13 07:36:31.386578 ignition[833]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 07:36:31.386578 ignition[833]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 07:36:31.390666 ignition[833]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 07:36:31.391639 ignition[833]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 07:36:31.392529 ignition[833]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 07:36:31.392529 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 07:36:31.391662 unknown[833]: wrote ssh authorized keys file for user: core Dec 13 07:36:31.396919 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 07:36:31.396919 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 07:36:31.396919 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 07:36:31.541026 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 13 07:36:31.790843 systemd-networkd[710]: eth0: Gained IPv6LL Dec 13 07:36:31.833794 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 07:36:31.835285 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 13 07:36:31.836598 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 07:36:31.837641 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 07:36:31.837641 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 07:36:31.837641 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 07:36:31.840705 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 07:36:31.840705 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 07:36:31.840705 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 07:36:31.840705 ignition[833]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 07:36:31.840705 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 07:36:31.840705 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 07:36:31.840705 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 07:36:31.840705 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 07:36:31.840705 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 13 07:36:32.488205 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Dec 13 07:36:33.297449 systemd-networkd[710]: eth0: Ignoring DHCPv6 address 2a02:1348:179:93bd:24:19ff:fee6:4ef6/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:93bd:24:19ff:fee6:4ef6/64 assigned by NDisc. Dec 13 07:36:33.297462 systemd-networkd[710]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Dec 13 07:36:34.197317 ignition[833]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 07:36:34.199281 ignition[833]: INFO : files: op(c): [started] processing unit "coreos-metadata-sshkeys@.service" Dec 13 07:36:34.200337 ignition[833]: INFO : files: op(c): [finished] processing unit "coreos-metadata-sshkeys@.service" Dec 13 07:36:34.200337 ignition[833]: INFO : files: op(d): [started] processing unit "containerd.service" Dec 13 07:36:34.205666 ignition[833]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 07:36:34.205666 ignition[833]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 07:36:34.205666 ignition[833]: INFO : files: op(d): [finished] processing unit "containerd.service" Dec 13 07:36:34.205666 ignition[833]: INFO : files: op(f): [started] processing unit "prepare-helm.service" Dec 13 07:36:34.205666 ignition[833]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 07:36:34.205666 ignition[833]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 07:36:34.205666 ignition[833]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" Dec 13 07:36:34.205666 ignition[833]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Dec 13 07:36:34.205666 ignition[833]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 07:36:34.205666 ignition[833]: INFO : files: op(12): [started] setting preset to enabled for 
"coreos-metadata-sshkeys@.service " Dec 13 07:36:34.205666 ignition[833]: INFO : files: op(12): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 07:36:34.251437 kernel: kauditd_printk_skb: 27 callbacks suppressed Dec 13 07:36:34.251471 kernel: audit: type=1130 audit(1734075394.217:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:34.251511 kernel: audit: type=1130 audit(1734075394.234:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:34.251531 kernel: audit: type=1131 audit(1734075394.234:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:34.251548 kernel: audit: type=1130 audit(1734075394.244:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:34.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:34.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:34.234000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:34.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:34.251797 ignition[833]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 07:36:34.251797 ignition[833]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 07:36:34.251797 ignition[833]: INFO : files: files passed Dec 13 07:36:34.251797 ignition[833]: INFO : Ignition finished successfully Dec 13 07:36:34.214997 systemd[1]: Finished ignition-files.service. Dec 13 07:36:34.220501 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 07:36:34.257370 initrd-setup-root-after-ignition[858]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 07:36:34.228632 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 07:36:34.229802 systemd[1]: Starting ignition-quench.service... Dec 13 07:36:34.234011 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 07:36:34.234133 systemd[1]: Finished ignition-quench.service. Dec 13 07:36:34.235109 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 07:36:34.244840 systemd[1]: Reached target ignition-complete.target. Dec 13 07:36:34.251591 systemd[1]: Starting initrd-parse-etc.service... 
Dec 13 07:36:34.271052 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 07:36:34.272034 systemd[1]: Finished initrd-parse-etc.service. Dec 13 07:36:34.284361 kernel: audit: type=1130 audit(1734075394.272:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:34.284396 kernel: audit: type=1131 audit(1734075394.272:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:34.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:34.272000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:34.272943 systemd[1]: Reached target initrd-fs.target. Dec 13 07:36:34.283159 systemd[1]: Reached target initrd.target. Dec 13 07:36:34.283877 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 07:36:34.285186 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 07:36:34.304296 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 07:36:34.311213 kernel: audit: type=1130 audit(1734075394.304:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:34.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:34.306009 systemd[1]: Starting initrd-cleanup.service... Dec 13 07:36:34.318296 systemd[1]: Stopped target nss-lookup.target. Dec 13 07:36:34.319027 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 07:36:34.320382 systemd[1]: Stopped target timers.target. Dec 13 07:36:34.321616 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 07:36:34.327854 kernel: audit: type=1131 audit(1734075394.322:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:34.322000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:34.321767 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 07:36:34.322965 systemd[1]: Stopped target initrd.target. Dec 13 07:36:34.328558 systemd[1]: Stopped target basic.target. Dec 13 07:36:34.329729 systemd[1]: Stopped target ignition-complete.target. Dec 13 07:36:34.330922 systemd[1]: Stopped target ignition-diskful.target. Dec 13 07:36:34.332099 systemd[1]: Stopped target initrd-root-device.target. Dec 13 07:36:34.333300 systemd[1]: Stopped target remote-fs.target. Dec 13 07:36:34.334598 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 07:36:34.335822 systemd[1]: Stopped target sysinit.target. Dec 13 07:36:34.337376 systemd[1]: Stopped target local-fs.target. Dec 13 07:36:34.338551 systemd[1]: Stopped target local-fs-pre.target. 
Dec 13 07:36:34.339718 systemd[1]: Stopped target swap.target. Dec 13 07:36:34.347061 kernel: audit: type=1131 audit(1734075394.341:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:34.341000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:34.340775 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 07:36:34.341017 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 07:36:34.354285 kernel: audit: type=1131 audit(1734075394.348:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:34.348000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:34.342240 systemd[1]: Stopped target cryptsetup.target. Dec 13 07:36:34.354000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:34.347849 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 07:36:34.356000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:34.348113 systemd[1]: Stopped dracut-initqueue.service. Dec 13 07:36:34.349259 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 07:36:34.349471 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 07:36:34.355234 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 07:36:34.355462 systemd[1]: Stopped ignition-files.service. Dec 13 07:36:34.357602 systemd[1]: Stopping ignition-mount.service... Dec 13 07:36:34.364159 systemd[1]: Stopping iscsiuio.service... Dec 13 07:36:34.365441 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 07:36:34.366417 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 07:36:34.367000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:34.369365 systemd[1]: Stopping sysroot-boot.service... Dec 13 07:36:34.370693 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Dec 13 07:36:34.371499 ignition[871]: INFO : Ignition 2.14.0 Dec 13 07:36:34.371499 ignition[871]: INFO : Stage: umount Dec 13 07:36:34.371499 ignition[871]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 07:36:34.371499 ignition[871]: DEBUG : parsing config with SHA512: ce918cf8568bff1426dda9ea05b778568a1626fcf4c1bded9ebe13fee104bc1b92fac5f7093a3bfc7d99777c3793d01249c863845c2ca48413d9477d40af178a Dec 13 07:36:34.375042 ignition[871]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 13 07:36:34.375042 ignition[871]: INFO : umount: umount passed Dec 13 07:36:34.375042 ignition[871]: INFO : Ignition finished successfully Dec 13 07:36:34.377000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:34.379000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:34.376302 systemd[1]: Stopped systemd-udev-trigger.service. Dec 13 07:36:34.377801 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 07:36:34.378013 systemd[1]: Stopped dracut-pre-trigger.service. Dec 13 07:36:34.384731 systemd[1]: iscsiuio.service: Deactivated successfully. Dec 13 07:36:34.384882 systemd[1]: Stopped iscsiuio.service. Dec 13 07:36:34.386000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:34.387663 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 07:36:34.389000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:34.387780 systemd[1]: Stopped ignition-mount.service. Dec 13 07:36:34.391354 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 07:36:34.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:34.392000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:34.391474 systemd[1]: Finished initrd-cleanup.service. Dec 13 07:36:34.395374 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 07:36:34.397367 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 07:36:34.397474 systemd[1]: Stopped ignition-disks.service. Dec 13 07:36:34.399000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:34.399359 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 07:36:34.399000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:34.399430 systemd[1]: Stopped ignition-kargs.service. 
Dec 13 07:36:34.401000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:34.400112 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 07:36:34.400168 systemd[1]: Stopped ignition-fetch.service. Dec 13 07:36:34.404000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:34.401420 systemd[1]: Stopped target network.target. Dec 13 07:36:34.402503 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 07:36:34.402570 systemd[1]: Stopped ignition-fetch-offline.service. Dec 13 07:36:34.404485 systemd[1]: Stopped target paths.target. Dec 13 07:36:34.405481 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 07:36:34.409331 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 07:36:34.410034 systemd[1]: Stopped target slices.target. Dec 13 07:36:34.411257 systemd[1]: Stopped target sockets.target. Dec 13 07:36:34.413124 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 07:36:34.413226 systemd[1]: Closed iscsid.socket. Dec 13 07:36:34.414431 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 07:36:34.414492 systemd[1]: Closed iscsiuio.socket. Dec 13 07:36:34.416000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:34.415489 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 07:36:34.415561 systemd[1]: Stopped ignition-setup.service. Dec 13 07:36:34.416902 systemd[1]: Stopping systemd-networkd.service... Dec 13 07:36:34.419489 systemd[1]: Stopping systemd-resolved.service... Dec 13 07:36:34.422265 systemd-networkd[710]: eth0: DHCPv6 lease lost Dec 13 07:36:34.424946 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 07:36:34.425121 systemd[1]: Stopped systemd-resolved.service. Dec 13 07:36:34.426000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:34.427050 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 07:36:34.427000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:34.428000 audit: BPF prog-id=6 op=UNLOAD Dec 13 07:36:34.427177 systemd[1]: Stopped systemd-networkd.service. Dec 13 07:36:34.429000 audit: BPF prog-id=9 op=UNLOAD Dec 13 07:36:34.428475 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 07:36:34.433000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:34.428524 systemd[1]: Closed systemd-networkd.socket. Dec 13 07:36:34.434000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:34.430698 systemd[1]: Stopping network-cleanup.service... 
Dec 13 07:36:34.433254 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 07:36:34.433335 systemd[1]: Stopped parse-ip-for-networkd.service. Dec 13 07:36:34.434005 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 07:36:34.434069 systemd[1]: Stopped systemd-sysctl.service. Dec 13 07:36:34.435000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:34.435035 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 07:36:34.435092 systemd[1]: Stopped systemd-modules-load.service. Dec 13 07:36:34.439339 systemd[1]: Stopping systemd-udevd.service... Dec 13 07:36:34.444630 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 07:36:34.449520 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 07:36:34.449684 systemd[1]: Stopped network-cleanup.service. Dec 13 07:36:34.452000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:34.453171 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 07:36:34.453406 systemd[1]: Stopped systemd-udevd.service. Dec 13 07:36:34.454000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:34.455146 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 07:36:34.455239 systemd[1]: Closed systemd-udevd-control.socket. Dec 13 07:36:34.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:34.455894 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 07:36:34.472000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:34.455947 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 07:36:34.473000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:34.478000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:34.457274 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 07:36:34.457339 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 07:36:34.471709 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 07:36:34.471788 systemd[1]: Stopped dracut-cmdline.service. Dec 13 07:36:34.472919 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 07:36:34.473007 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 07:36:34.475479 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 07:36:34.485000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 13 07:36:34.485000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:34.478237 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 07:36:34.478318 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 07:36:34.485075 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 07:36:34.485217 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 07:36:34.559392 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 07:36:34.559583 systemd[1]: Stopped sysroot-boot.service. Dec 13 07:36:34.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:34.561233 systemd[1]: Reached target initrd-switch-root.target. Dec 13 07:36:34.562108 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 07:36:34.563000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:34.562173 systemd[1]: Stopped initrd-setup-root.service. Dec 13 07:36:34.564589 systemd[1]: Starting initrd-switch-root.service... Dec 13 07:36:34.575580 systemd[1]: Switching root. Dec 13 07:36:34.580000 audit: BPF prog-id=8 op=UNLOAD Dec 13 07:36:34.580000 audit: BPF prog-id=7 op=UNLOAD Dec 13 07:36:34.581000 audit: BPF prog-id=5 op=UNLOAD Dec 13 07:36:34.581000 audit: BPF prog-id=4 op=UNLOAD Dec 13 07:36:34.581000 audit: BPF prog-id=3 op=UNLOAD Dec 13 07:36:34.595388 iscsid[715]: iscsid shutting down. Dec 13 07:36:34.596220 systemd-journald[202]: Received SIGTERM from PID 1 (systemd). Dec 13 07:36:34.596296 systemd-journald[202]: Journal stopped Dec 13 07:36:38.464159 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 07:36:38.464299 kernel: SELinux: Class anon_inode not defined in policy. Dec 13 07:36:38.464331 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 07:36:38.464379 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 07:36:38.464399 kernel: SELinux: policy capability open_perms=1 Dec 13 07:36:38.464443 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 07:36:38.464485 kernel: SELinux: policy capability always_check_network=0 Dec 13 07:36:38.464504 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 07:36:38.464529 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 07:36:38.464560 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 07:36:38.464587 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 07:36:38.464609 systemd[1]: Successfully loaded SELinux policy in 59.189ms. Dec 13 07:36:38.464645 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.576ms. Dec 13 07:36:38.464680 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 07:36:38.464704 systemd[1]: Detected virtualization kvm. 
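At this point the initramfs has handed over to the real root: journald restarts, the SELinux policy is loaded (classes the policy does not define, such as mctp_socket, are allowed rather than denied), /dev, /run and the cgroup tree are relabelled, and systemd 252 detects a KVM guest. Two quick checks reproduce those detections on a running host (a sketch; getenforce assumes the SELinux userspace tools are present in the image):

    systemd-detect-virt      # prints 'kvm' for this guest
    getenforce               # SELinux mode; the permissive=1 AVC records further down show it is permissive here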
Dec 13 07:36:38.464725 systemd[1]: Detected architecture x86-64. Dec 13 07:36:38.464746 systemd[1]: Detected first boot. Dec 13 07:36:38.464767 systemd[1]: Hostname set to <srv-ktue8.gb1.brightbox.com>. Dec 13 07:36:38.464788 systemd[1]: Initializing machine ID from VM UUID. Dec 13 07:36:38.464808 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Dec 13 07:36:38.464839 systemd[1]: Populated /etc with preset unit settings. Dec 13 07:36:38.464863 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 07:36:38.464904 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 07:36:38.464930 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 07:36:38.464959 systemd[1]: Queued start job for default target multi-user.target. Dec 13 07:36:38.464982 systemd[1]: Unnecessary job was removed for dev-vda6.device. Dec 13 07:36:38.465003 systemd[1]: Created slice system-addon\x2dconfig.slice. Dec 13 07:36:38.465041 systemd[1]: Created slice system-addon\x2drun.slice. Dec 13 07:36:38.465063 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Dec 13 07:36:38.465084 systemd[1]: Created slice system-getty.slice. Dec 13 07:36:38.465104 systemd[1]: Created slice system-modprobe.slice. Dec 13 07:36:38.465125 systemd[1]: Created slice system-serial\x2dgetty.slice. Dec 13 07:36:38.465159 systemd[1]: Created slice system-system\x2dcloudinit.slice. Dec 13 07:36:38.465179 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 07:36:38.465200 systemd[1]: Created slice user.slice. Dec 13 07:36:38.466277 systemd[1]: Started systemd-ask-password-console.path. Dec 13 07:36:38.466327 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 07:36:38.466348 systemd[1]: Set up automount boot.automount. Dec 13 07:36:38.466367 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 07:36:38.466385 systemd[1]: Reached target integritysetup.target. Dec 13 07:36:38.466404 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 07:36:38.466422 systemd[1]: Reached target remote-fs.target. Dec 13 07:36:38.466452 systemd[1]: Reached target slices.target. Dec 13 07:36:38.466472 systemd[1]: Reached target swap.target. Dec 13 07:36:38.466490 systemd[1]: Reached target torcx.target. Dec 13 07:36:38.466521 systemd[1]: Reached target veritysetup.target. Dec 13 07:36:38.466541 systemd[1]: Listening on systemd-coredump.socket. Dec 13 07:36:38.466560 systemd[1]: Listening on systemd-initctl.socket. Dec 13 07:36:38.466596 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 07:36:38.466616 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 07:36:38.466636 systemd[1]: Listening on systemd-journald.socket. Dec 13 07:36:38.466656 systemd[1]: Listening on systemd-networkd.socket. Dec 13 07:36:38.466689 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 07:36:38.466711 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 07:36:38.466731 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 07:36:38.466752 systemd[1]: Mounting dev-hugepages.mount... Dec 13 07:36:38.466773 systemd[1]: Mounting dev-mqueue.mount...
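The two locksmithd.service warnings above concern cgroup v1-era directives: CPUShares= and MemoryLimit= are deprecated in favour of the unified-hierarchy settings CPUWeight= and MemoryMax= (the default CPUShares=1024 corresponds roughly to CPUWeight=100, and MemoryLimit= maps directly onto MemoryMax=). If one wanted to modernise the unit without editing the shipped file, a drop-in of this shape would do it (a sketch; the values are illustrative, not what locksmithd actually needs):

    # /etc/systemd/system/locksmithd.service.d/10-cgroupv2.conf
    [Service]
    CPUWeight=100
    MemoryMax=128M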
Dec 13 07:36:38.466794 systemd[1]: Mounting media.mount... Dec 13 07:36:38.466814 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 07:36:38.466834 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 07:36:38.466855 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 07:36:38.466895 systemd[1]: Mounting tmp.mount... Dec 13 07:36:38.466919 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 07:36:38.466941 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 07:36:38.466962 systemd[1]: Starting kmod-static-nodes.service... Dec 13 07:36:38.466982 systemd[1]: Starting modprobe@configfs.service... Dec 13 07:36:38.467002 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 07:36:38.467022 systemd[1]: Starting modprobe@drm.service... Dec 13 07:36:38.467043 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 07:36:38.467068 systemd[1]: Starting modprobe@fuse.service... Dec 13 07:36:38.467106 systemd[1]: Starting modprobe@loop.service... Dec 13 07:36:38.467130 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 07:36:38.467158 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Dec 13 07:36:38.467191 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Dec 13 07:36:38.470402 systemd[1]: Starting systemd-journald.service... Dec 13 07:36:38.470445 kernel: fuse: init (API version 7.34) Dec 13 07:36:38.470466 systemd[1]: Starting systemd-modules-load.service... Dec 13 07:36:38.470486 kernel: loop: module loaded Dec 13 07:36:38.470506 systemd[1]: Starting systemd-network-generator.service... Dec 13 07:36:38.470541 systemd[1]: Starting systemd-remount-fs.service... Dec 13 07:36:38.470563 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 07:36:38.470583 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 07:36:38.470616 systemd[1]: Mounted dev-hugepages.mount. Dec 13 07:36:38.470635 systemd[1]: Mounted dev-mqueue.mount. Dec 13 07:36:38.470656 systemd[1]: Mounted media.mount. Dec 13 07:36:38.470682 systemd-journald[1015]: Journal started Dec 13 07:36:38.470756 systemd-journald[1015]: Runtime Journal (/run/log/journal/f37f56e85fea4575aeccae7d3c049781) is 4.7M, max 38.1M, 33.3M free. 
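The journald line above shows the volatile runtime journal under /run/log/journal with an automatically sized cap (38.1M here, derived from the size of the backing tmpfs). If that cap should be pinned rather than computed, a journald drop-in is enough (a sketch; the value is illustrative):

    # /etc/systemd/journald.conf.d/10-runtime-size.conf
    [Journal]
    RuntimeMaxUse=64M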
Dec 13 07:36:38.264000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 07:36:38.264000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 13 07:36:38.456000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 07:36:38.456000 audit[1015]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffee52e3190 a2=4000 a3=7ffee52e322c items=0 ppid=1 pid=1015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:36:38.456000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 07:36:38.480365 systemd[1]: Started systemd-journald.service. Dec 13 07:36:38.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:38.480061 systemd[1]: Mounted sys-kernel-debug.mount. Dec 13 07:36:38.482768 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 07:36:38.483537 systemd[1]: Mounted tmp.mount. Dec 13 07:36:38.488993 systemd[1]: Finished kmod-static-nodes.service. Dec 13 07:36:38.490079 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 07:36:38.490367 systemd[1]: Finished modprobe@configfs.service. Dec 13 07:36:38.491444 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 07:36:38.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:38.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:38.490000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:38.492063 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 07:36:38.493104 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 07:36:38.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:38.492000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:38.494435 systemd[1]: Finished modprobe@drm.service. Dec 13 07:36:38.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 07:36:38.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:38.496020 systemd[1]: Finished flatcar-tmpfiles.service. Dec 13 07:36:38.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:38.497064 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 07:36:38.497334 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 07:36:38.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:38.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:38.498386 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 07:36:38.498633 systemd[1]: Finished modprobe@fuse.service. Dec 13 07:36:38.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:38.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:38.499654 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 07:36:38.499903 systemd[1]: Finished modprobe@loop.service. Dec 13 07:36:38.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:38.500000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:38.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:38.501515 systemd[1]: Finished systemd-modules-load.service. Dec 13 07:36:38.503587 systemd[1]: Finished systemd-network-generator.service. Dec 13 07:36:38.504000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:38.506781 systemd[1]: Finished systemd-remount-fs.service. Dec 13 07:36:38.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:38.508024 systemd[1]: Reached target network-pre.target. 
Dec 13 07:36:38.512092 systemd[1]: Mounting sys-fs-fuse-connections.mount... Dec 13 07:36:38.514529 systemd[1]: Mounting sys-kernel-config.mount... Dec 13 07:36:38.516460 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 07:36:38.520300 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 07:36:38.526731 systemd[1]: Starting systemd-journal-flush.service... Dec 13 07:36:38.527592 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 07:36:38.535385 systemd[1]: Starting systemd-random-seed.service... Dec 13 07:36:38.536291 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 07:36:38.538590 systemd[1]: Starting systemd-sysctl.service... Dec 13 07:36:38.542891 systemd[1]: Starting systemd-sysusers.service... Dec 13 07:36:38.549548 systemd[1]: Mounted sys-fs-fuse-connections.mount. Dec 13 07:36:38.550351 systemd[1]: Mounted sys-kernel-config.mount. Dec 13 07:36:38.573563 systemd-journald[1015]: Time spent on flushing to /var/log/journal/f37f56e85fea4575aeccae7d3c049781 is 100.474ms for 1224 entries. Dec 13 07:36:38.573563 systemd-journald[1015]: System Journal (/var/log/journal/f37f56e85fea4575aeccae7d3c049781) is 8.0M, max 584.8M, 576.8M free. Dec 13 07:36:38.685856 systemd-journald[1015]: Received client request to flush runtime journal. Dec 13 07:36:38.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:38.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:38.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:38.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:38.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:38.577791 systemd[1]: Finished systemd-random-seed.service. Dec 13 07:36:38.578693 systemd[1]: Reached target first-boot-complete.target. Dec 13 07:36:38.603772 systemd[1]: Finished systemd-sysctl.service. Dec 13 07:36:38.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:38.618422 systemd[1]: Finished systemd-sysusers.service. Dec 13 07:36:38.621541 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 07:36:38.674648 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 07:36:38.681185 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 07:36:38.683937 systemd[1]: Starting systemd-udev-settle.service... 
Dec 13 07:36:38.688485 systemd[1]: Finished systemd-journal-flush.service. Dec 13 07:36:38.701782 udevadm[1077]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 07:36:39.254164 systemd[1]: Finished systemd-hwdb-update.service. Dec 13 07:36:39.262245 kernel: kauditd_printk_skb: 76 callbacks suppressed Dec 13 07:36:39.262434 kernel: audit: type=1130 audit(1734075399.255:115): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:39.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:39.258742 systemd[1]: Starting systemd-udevd.service... Dec 13 07:36:39.288819 systemd-udevd[1081]: Using default interface naming scheme 'v252'. Dec 13 07:36:39.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:39.321547 systemd[1]: Started systemd-udevd.service. Dec 13 07:36:39.324942 systemd[1]: Starting systemd-networkd.service... Dec 13 07:36:39.328212 kernel: audit: type=1130 audit(1734075399.322:116): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:39.338003 systemd[1]: Starting systemd-userdbd.service... Dec 13 07:36:39.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:39.403217 systemd[1]: Found device dev-ttyS0.device. Dec 13 07:36:39.403970 systemd[1]: Started systemd-userdbd.service. Dec 13 07:36:39.410234 kernel: audit: type=1130 audit(1734075399.404:117): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:39.544257 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 07:36:39.552072 systemd-networkd[1082]: lo: Link UP Dec 13 07:36:39.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:39.552713 systemd-networkd[1082]: lo: Gained carrier Dec 13 07:36:39.553576 systemd-networkd[1082]: Enumeration completed Dec 13 07:36:39.553765 systemd[1]: Started systemd-networkd.service. Dec 13 07:36:39.557881 systemd-networkd[1082]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 07:36:39.564138 kernel: audit: type=1130 audit(1734075399.556:118): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 07:36:39.564639 systemd-networkd[1082]: eth0: Link UP Dec 13 07:36:39.564770 systemd-networkd[1082]: eth0: Gained carrier Dec 13 07:36:39.574241 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Dec 13 07:36:39.574494 systemd-networkd[1082]: eth0: DHCPv4 address 10.230.78.246/30, gateway 10.230.78.245 acquired from 10.230.78.245 Dec 13 07:36:39.586249 kernel: ACPI: button: Power Button [PWRF] Dec 13 07:36:39.611714 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 07:36:39.640000 audit[1085]: AVC avc: denied { confidentiality } for pid=1085 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 07:36:39.650214 kernel: audit: type=1400 audit(1734075399.640:119): avc: denied { confidentiality } for pid=1085 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 07:36:39.640000 audit[1085]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55f87fd83ba0 a1=337fc a2=7f7b73feabc5 a3=5 items=110 ppid=1081 pid=1085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:36:39.640000 audit: CWD cwd="/" Dec 13 07:36:39.660124 kernel: audit: type=1300 audit(1734075399.640:119): arch=c000003e syscall=175 success=yes exit=0 a0=55f87fd83ba0 a1=337fc a2=7f7b73feabc5 a3=5 items=110 ppid=1081 pid=1085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:36:39.660188 kernel: audit: type=1307 audit(1734075399.640:119): cwd="/" Dec 13 07:36:39.660256 kernel: audit: type=1302 audit(1734075399.640:119): item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=1 name=(null) inode=16464 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.669785 kernel: audit: type=1302 audit(1734075399.640:119): item=1 name=(null) inode=16464 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=2 name=(null) inode=16464 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=3 name=(null) inode=16465 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.676301 kernel: audit: type=1302 audit(1734075399.640:119): item=2 name=(null) inode=16464 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 
07:36:39.640000 audit: PATH item=4 name=(null) inode=16464 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=5 name=(null) inode=16466 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=6 name=(null) inode=16464 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=7 name=(null) inode=16467 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=8 name=(null) inode=16467 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=9 name=(null) inode=16468 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=10 name=(null) inode=16467 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=11 name=(null) inode=16469 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=12 name=(null) inode=16467 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=13 name=(null) inode=16470 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=14 name=(null) inode=16467 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=15 name=(null) inode=16471 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=16 name=(null) inode=16467 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=17 name=(null) inode=16472 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=18 name=(null) inode=16464 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=19 name=(null) inode=16473 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=20 name=(null) inode=16473 dev=00:0b mode=040750 
ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=21 name=(null) inode=16474 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=22 name=(null) inode=16473 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=23 name=(null) inode=16475 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=24 name=(null) inode=16473 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=25 name=(null) inode=16476 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=26 name=(null) inode=16473 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=27 name=(null) inode=16477 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=28 name=(null) inode=16473 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=29 name=(null) inode=16478 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=30 name=(null) inode=16464 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=31 name=(null) inode=16479 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=32 name=(null) inode=16479 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=33 name=(null) inode=16480 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=34 name=(null) inode=16479 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=35 name=(null) inode=16481 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=36 name=(null) inode=16479 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=37 name=(null) inode=16482 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=38 name=(null) inode=16479 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=39 name=(null) inode=16483 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=40 name=(null) inode=16479 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=41 name=(null) inode=16484 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=42 name=(null) inode=16464 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=43 name=(null) inode=16485 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=44 name=(null) inode=16485 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=45 name=(null) inode=16486 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=46 name=(null) inode=16485 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=47 name=(null) inode=16487 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=48 name=(null) inode=16485 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=49 name=(null) inode=16488 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=50 name=(null) inode=16485 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=51 name=(null) inode=16489 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=52 name=(null) inode=16485 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 
audit: PATH item=53 name=(null) inode=16490 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=55 name=(null) inode=16491 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=56 name=(null) inode=16491 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=57 name=(null) inode=16492 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=58 name=(null) inode=16491 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=59 name=(null) inode=16493 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=60 name=(null) inode=16491 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=61 name=(null) inode=16494 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=62 name=(null) inode=16494 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=63 name=(null) inode=16495 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=64 name=(null) inode=16494 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=65 name=(null) inode=16496 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=66 name=(null) inode=16494 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=67 name=(null) inode=16497 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=68 name=(null) inode=16494 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=69 name=(null) inode=16498 dev=00:0b mode=0100640 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=70 name=(null) inode=16494 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=71 name=(null) inode=16499 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=72 name=(null) inode=16491 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=73 name=(null) inode=16500 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=74 name=(null) inode=16500 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=75 name=(null) inode=16501 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=76 name=(null) inode=16500 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=77 name=(null) inode=16502 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=78 name=(null) inode=16500 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.682219 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 13 07:36:39.688092 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Dec 13 07:36:39.688448 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 13 07:36:39.640000 audit: PATH item=79 name=(null) inode=16503 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=80 name=(null) inode=16500 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=81 name=(null) inode=16504 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=82 name=(null) inode=16500 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=83 name=(null) inode=16505 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=84 name=(null) inode=16491 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=85 name=(null) inode=16506 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=86 name=(null) inode=16506 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=87 name=(null) inode=16507 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=88 name=(null) inode=16506 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=89 name=(null) inode=16508 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=90 name=(null) inode=16506 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=91 name=(null) inode=16509 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=92 name=(null) inode=16506 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=93 name=(null) inode=16510 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=94 name=(null) inode=16506 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=95 name=(null) inode=16511 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=96 name=(null) inode=16491 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=97 name=(null) inode=16512 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=98 name=(null) inode=16512 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=99 name=(null) inode=16513 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=100 name=(null) inode=16512 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=101 name=(null) inode=16514 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=102 name=(null) inode=16512 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=103 name=(null) inode=16515 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=104 name=(null) inode=16512 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=105 name=(null) inode=16516 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=106 name=(null) inode=16512 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=107 name=(null) inode=16517 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PATH item=109 name=(null) inode=16518 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:36:39.640000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 07:36:39.703223 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5 Dec 13 07:36:39.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:39.901508 systemd[1]: Finished systemd-udev-settle.service. Dec 13 07:36:39.906185 systemd[1]: Starting lvm2-activation-early.service... Dec 13 07:36:39.932552 lvm[1111]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 07:36:39.969010 systemd[1]: Finished lvm2-activation-early.service. Dec 13 07:36:39.970048 systemd[1]: Reached target cryptsetup.target. Dec 13 07:36:39.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:39.973123 systemd[1]: Starting lvm2-activation.service... Dec 13 07:36:39.981585 lvm[1113]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 07:36:40.013805 systemd[1]: Finished lvm2-activation.service. Dec 13 07:36:40.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 07:36:40.014704 systemd[1]: Reached target local-fs-pre.target. Dec 13 07:36:40.015386 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 07:36:40.015428 systemd[1]: Reached target local-fs.target. Dec 13 07:36:40.016006 systemd[1]: Reached target machines.target. Dec 13 07:36:40.018976 systemd[1]: Starting ldconfig.service... Dec 13 07:36:40.020273 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 07:36:40.020382 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 07:36:40.022206 systemd[1]: Starting systemd-boot-update.service... Dec 13 07:36:40.024404 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 07:36:40.031996 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 07:36:40.036437 systemd[1]: Starting systemd-sysext.service... Dec 13 07:36:40.037869 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1116 (bootctl) Dec 13 07:36:40.041045 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 07:36:40.058412 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 07:36:40.064233 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 07:36:40.064535 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 07:36:40.180812 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 07:36:40.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:40.186233 kernel: loop0: detected capacity change from 0 to 211296 Dec 13 07:36:40.199968 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 07:36:40.201613 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 07:36:40.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:40.229554 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 07:36:40.251421 kernel: loop1: detected capacity change from 0 to 211296 Dec 13 07:36:40.261069 systemd-fsck[1129]: fsck.fat 4.2 (2021-01-31) Dec 13 07:36:40.261069 systemd-fsck[1129]: /dev/vda1: 789 files, 119291/258078 clusters Dec 13 07:36:40.266221 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 07:36:40.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:40.269052 systemd[1]: Mounting boot.mount... Dec 13 07:36:40.279351 (sd-sysext)[1133]: Using extensions 'kubernetes'. Dec 13 07:36:40.281438 (sd-sysext)[1133]: Merged extensions into '/usr'. Dec 13 07:36:40.307166 systemd[1]: Mounted boot.mount. Dec 13 07:36:40.317244 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 07:36:40.319920 systemd[1]: Mounting usr-share-oem.mount... 
Dec 13 07:36:40.322656 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 07:36:40.324641 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 07:36:40.328857 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 07:36:40.332502 systemd[1]: Starting modprobe@loop.service... Dec 13 07:36:40.333352 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 07:36:40.333614 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 07:36:40.333858 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 07:36:40.337655 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 07:36:40.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:40.338000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:40.337905 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 07:36:40.339302 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 07:36:40.339506 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 07:36:40.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:40.342000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:40.343065 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 07:36:40.343365 systemd[1]: Finished modprobe@loop.service. Dec 13 07:36:40.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:40.343000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:40.344584 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 07:36:40.344741 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 07:36:40.355927 systemd[1]: Mounted usr-share-oem.mount. Dec 13 07:36:40.358155 systemd[1]: Finished systemd-sysext.service. Dec 13 07:36:40.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:40.363951 systemd[1]: Starting ensure-sysext.service... Dec 13 07:36:40.370592 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 07:36:40.381786 systemd[1]: Reloading. 
Dec 13 07:36:40.407047 systemd-tmpfiles[1152]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 07:36:40.418299 systemd-tmpfiles[1152]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 07:36:40.428935 systemd-tmpfiles[1152]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 07:36:40.496912 /usr/lib/systemd/system-generators/torcx-generator[1176]: time="2024-12-13T07:36:40Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 07:36:40.496971 /usr/lib/systemd/system-generators/torcx-generator[1176]: time="2024-12-13T07:36:40Z" level=info msg="torcx already run" Dec 13 07:36:40.630536 ldconfig[1115]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 07:36:40.647252 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 07:36:40.647284 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 07:36:40.675256 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 07:36:40.753901 systemd-networkd[1082]: eth0: Gained IPv6LL Dec 13 07:36:40.774854 systemd[1]: Finished ldconfig.service. Dec 13 07:36:40.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:40.776455 systemd[1]: Finished systemd-boot-update.service. Dec 13 07:36:40.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:40.779649 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 07:36:40.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:40.785282 systemd[1]: Starting audit-rules.service... Dec 13 07:36:40.788252 systemd[1]: Starting clean-ca-certificates.service... Dec 13 07:36:40.791945 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 07:36:40.796004 systemd[1]: Starting systemd-resolved.service... Dec 13 07:36:40.799657 systemd[1]: Starting systemd-timesyncd.service... Dec 13 07:36:40.803049 systemd[1]: Starting systemd-update-utmp.service... Dec 13 07:36:40.825103 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 07:36:40.830139 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 07:36:40.832922 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 07:36:40.842485 systemd[1]: Starting modprobe@loop.service... Dec 13 07:36:40.845715 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Dec 13 07:36:40.845993 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 07:36:40.851499 systemd[1]: Finished clean-ca-certificates.service. Dec 13 07:36:40.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:40.853586 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 07:36:40.853866 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 07:36:40.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:40.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:40.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:40.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:40.856472 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 07:36:40.856730 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 07:36:40.858048 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 07:36:40.858201 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 07:36:40.861943 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 07:36:40.864069 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 07:36:40.870021 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 07:36:40.870793 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 07:36:40.871055 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 07:36:40.871302 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 07:36:40.879486 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 07:36:40.883738 systemd[1]: Starting modprobe@drm.service... Dec 13 07:36:40.884609 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 07:36:40.884844 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 07:36:40.892817 systemd[1]: Starting systemd-networkd-wait-online.service... 
Dec 13 07:36:40.893701 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 07:36:40.895553 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 07:36:40.895835 systemd[1]: Finished modprobe@loop.service. Dec 13 07:36:40.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:40.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:40.898849 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 07:36:40.899109 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 07:36:40.899000 audit[1233]: SYSTEM_BOOT pid=1233 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 07:36:40.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:40.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:40.901577 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 07:36:40.902363 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 07:36:40.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:40.902000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:40.907648 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 07:36:40.907931 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 07:36:40.910764 systemd[1]: Finished ensure-sysext.service. Dec 13 07:36:40.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:40.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:40.915544 systemd[1]: Finished systemd-update-utmp.service. Dec 13 07:36:40.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 07:36:40.918283 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 07:36:40.919378 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 07:36:40.919661 systemd[1]: Finished modprobe@drm.service. Dec 13 07:36:40.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:40.921000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:40.934426 systemd[1]: Starting systemd-update-done.service... Dec 13 07:36:40.947671 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 07:36:40.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:40.956313 systemd[1]: Finished systemd-update-done.service. Dec 13 07:36:40.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:36:41.013000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 07:36:41.013000 audit[1269]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff09410fd0 a2=420 a3=0 items=0 ppid=1228 pid=1269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:36:41.013000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 07:36:41.014147 augenrules[1269]: No rules Dec 13 07:36:41.015226 systemd[1]: Finished audit-rules.service. Dec 13 07:36:41.029472 systemd-resolved[1231]: Positive Trust Anchors: Dec 13 07:36:41.029507 systemd-resolved[1231]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 07:36:41.029549 systemd-resolved[1231]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 07:36:41.040097 systemd-resolved[1231]: Using system hostname 'srv-ktue8.gb1.brightbox.com'. Dec 13 07:36:41.043482 systemd[1]: Started systemd-resolved.service. Dec 13 07:36:41.044360 systemd[1]: Reached target network.target. Dec 13 07:36:41.044941 systemd[1]: Reached target network-online.target. Dec 13 07:36:41.045552 systemd[1]: Reached target nss-lookup.target. Dec 13 07:36:41.049521 systemd[1]: Started systemd-timesyncd.service. Dec 13 07:36:41.050298 systemd[1]: Reached target sysinit.target. Dec 13 07:36:41.064405 systemd[1]: Started motdgen.path. Dec 13 07:36:41.065072 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. 
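systemd-resolved and systemd-timesyncd are both up at this point, with the DNSSEC trust anchors and negative trust anchors listed above. A hedged sketch of how the same state can be read back interactively, assuming the standard systemd client tools are present:

  resolvectl status                # per-link DNS servers plus the DNSSEC trust anchor state
  timedatectl timesync-status      # which NTP server systemd-timesyncd is tracking
  hostnamectl                      # the hostname resolved picked up ('srv-ktue8.gb1.brightbox.com')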
Dec 13 07:36:41.065814 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 07:36:41.066462 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 07:36:41.066519 systemd[1]: Reached target paths.target. Dec 13 07:36:41.067085 systemd[1]: Reached target time-set.target. Dec 13 07:36:41.067988 systemd[1]: Started logrotate.timer. Dec 13 07:36:41.068682 systemd[1]: Started mdadm.timer. Dec 13 07:36:41.069250 systemd[1]: Reached target timers.target. Dec 13 07:36:41.070380 systemd[1]: Listening on dbus.socket. Dec 13 07:36:41.073053 systemd[1]: Starting docker.socket... Dec 13 07:36:41.075875 systemd[1]: Listening on sshd.socket. Dec 13 07:36:41.076571 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 07:36:41.079643 systemd[1]: Listening on docker.socket. Dec 13 07:36:41.080268 systemd[1]: Reached target sockets.target. Dec 13 07:36:41.080828 systemd[1]: Reached target basic.target. Dec 13 07:36:41.081648 systemd[1]: System is tainted: cgroupsv1 Dec 13 07:36:41.081684 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 07:36:41.081745 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 07:36:41.081782 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 07:36:41.083658 systemd[1]: Starting containerd.service... Dec 13 07:36:41.086838 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Dec 13 07:36:41.089310 systemd[1]: Starting dbus.service... Dec 13 07:36:41.092396 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 07:36:41.096659 systemd[1]: Starting extend-filesystems.service... Dec 13 07:36:41.098075 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 07:36:41.102694 systemd[1]: Starting kubelet.service... Dec 13 07:36:41.107927 systemd[1]: Starting motdgen.service... Dec 13 07:36:41.114928 systemd[1]: Starting prepare-helm.service... Dec 13 07:36:41.127224 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 07:36:41.134605 systemd[1]: Starting sshd-keygen.service... Dec 13 07:36:41.141956 systemd[1]: Starting systemd-logind.service... Dec 13 07:36:41.142944 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 07:36:41.143106 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 07:36:41.147758 systemd[1]: Starting update-engine.service... Dec 13 07:36:41.153223 jq[1283]: false Dec 13 07:36:41.151841 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 07:36:41.152628 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 07:36:41.159802 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 07:36:41.160258 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 07:36:41.168794 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
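The "System is tainted: cgroupsv1" note above means systemd is running on the legacy cgroup hierarchy, which matters later for the containerd and kubelet cgroup drivers. A quick, hedged way to confirm which hierarchy a host uses:

  # "cgroup2fs" means the unified (v2) hierarchy; "tmpfs" means legacy cgroup v1 as logged here
  stat -fc %T /sys/fs/cgroup
  # Peek at the cgroup tree systemd is managing
  systemd-cgls --no-pager | head -n 20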
Dec 13 07:36:41.172899 systemd[1]: Finished ssh-key-proc-cmdline.service. Dec 13 07:36:42.126927 systemd-resolved[1231]: Clock change detected. Flushing caches. Dec 13 07:36:42.127135 systemd-timesyncd[1232]: Contacted time server 162.159.200.1:123 (0.flatcar.pool.ntp.org). Dec 13 07:36:42.127239 systemd-timesyncd[1232]: Initial clock synchronization to Fri 2024-12-13 07:36:42.126382 UTC. Dec 13 07:36:42.132213 extend-filesystems[1285]: Found loop1 Dec 13 07:36:42.138616 jq[1304]: true Dec 13 07:36:42.142134 extend-filesystems[1285]: Found vda Dec 13 07:36:42.142134 extend-filesystems[1285]: Found vda1 Dec 13 07:36:42.142134 extend-filesystems[1285]: Found vda2 Dec 13 07:36:42.142134 extend-filesystems[1285]: Found vda3 Dec 13 07:36:42.142134 extend-filesystems[1285]: Found usr Dec 13 07:36:42.142134 extend-filesystems[1285]: Found vda4 Dec 13 07:36:42.142134 extend-filesystems[1285]: Found vda6 Dec 13 07:36:42.142134 extend-filesystems[1285]: Found vda7 Dec 13 07:36:42.142134 extend-filesystems[1285]: Found vda9 Dec 13 07:36:42.142134 extend-filesystems[1285]: Checking size of /dev/vda9 Dec 13 07:36:42.214486 tar[1310]: linux-amd64/helm Dec 13 07:36:42.217909 jq[1317]: true Dec 13 07:36:42.156533 dbus-daemon[1280]: [system] SELinux support is enabled Dec 13 07:36:42.156832 systemd[1]: Started dbus.service. Dec 13 07:36:42.202110 dbus-daemon[1280]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1082 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 13 07:36:42.163137 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 07:36:42.211042 dbus-daemon[1280]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 07:36:42.163633 systemd[1]: Finished motdgen.service. Dec 13 07:36:42.165959 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 07:36:42.166020 systemd[1]: Reached target system-config.target. Dec 13 07:36:42.166714 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 07:36:42.166752 systemd[1]: Reached target user-config.target. Dec 13 07:36:42.197811 systemd-networkd[1082]: eth0: Ignoring DHCPv6 address 2a02:1348:179:93bd:24:19ff:fee6:4ef6/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:93bd:24:19ff:fee6:4ef6/64 assigned by NDisc. Dec 13 07:36:42.197819 systemd-networkd[1082]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Dec 13 07:36:42.218034 systemd[1]: Starting systemd-hostnamed.service... Dec 13 07:36:42.240539 extend-filesystems[1285]: Resized partition /dev/vda9 Dec 13 07:36:42.255523 extend-filesystems[1336]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 07:36:42.262921 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Dec 13 07:36:42.281143 update_engine[1302]: I1213 07:36:42.280544 1302 main.cc:92] Flatcar Update Engine starting Dec 13 07:36:42.288031 systemd[1]: Started update-engine.service. Dec 13 07:36:42.292456 systemd[1]: Started locksmithd.service. 
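systemd-networkd logs a conflict on eth0 between the DHCPv6-assigned /128 and the NDisc-generated /64 and points at IPv6Token= / UseAutonomousPrefix=. Below is a hypothetical .network drop-in following that hint; the file name, interface match, and token value are assumptions for illustration, not configuration taken from this host:

  cat <<'EOF' >/etc/systemd/network/10-eth0.network
  [Match]
  Name=eth0

  [Network]
  DHCP=yes
  # Either pin the NDisc-generated interface identifier to the DHCPv6 one ...
  IPv6Token=::24:19ff:fee6:4ef6

  [IPv6AcceptRA]
  # ... or take the hint's alternative and drop SLAAC addresses from the autonomous prefix
  #UseAutonomousPrefix=no
  EOF
  networkctl reload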
Dec 13 07:36:42.294091 update_engine[1302]: I1213 07:36:42.293571 1302 update_check_scheduler.cc:74] Next update check in 6m41s Dec 13 07:36:42.388583 bash[1347]: Updated "/home/core/.ssh/authorized_keys" Dec 13 07:36:42.390334 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 07:36:42.430432 env[1313]: time="2024-12-13T07:36:42.430296789Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 07:36:42.435220 systemd-logind[1301]: Watching system buttons on /dev/input/event2 (Power Button) Dec 13 07:36:42.435272 systemd-logind[1301]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 07:36:42.437346 systemd-logind[1301]: New seat seat0. Dec 13 07:36:42.442015 systemd[1]: Started systemd-logind.service. Dec 13 07:36:42.489916 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Dec 13 07:36:42.512767 extend-filesystems[1336]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 07:36:42.512767 extend-filesystems[1336]: old_desc_blocks = 1, new_desc_blocks = 8 Dec 13 07:36:42.512767 extend-filesystems[1336]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Dec 13 07:36:42.525298 extend-filesystems[1285]: Resized filesystem in /dev/vda9 Dec 13 07:36:42.516335 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 07:36:42.516720 systemd[1]: Finished extend-filesystems.service. Dec 13 07:36:42.557803 env[1313]: time="2024-12-13T07:36:42.557684412Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 07:36:42.558809 env[1313]: time="2024-12-13T07:36:42.558777809Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 07:36:42.563564 env[1313]: time="2024-12-13T07:36:42.563505716Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 07:36:42.573440 env[1313]: time="2024-12-13T07:36:42.573381998Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 07:36:42.574060 env[1313]: time="2024-12-13T07:36:42.574024077Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 07:36:42.574236 env[1313]: time="2024-12-13T07:36:42.574205282Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 07:36:42.574830 dbus-daemon[1280]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 13 07:36:42.575070 systemd[1]: Started systemd-hostnamed.service. 
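extend-filesystems just grew the root filesystem on /dev/vda9 from 1617920 to 15121403 4k blocks while it was mounted. The equivalent manual steps, sketched with the device names from the log in case the unit's work ever needs to be reproduced by hand:

  lsblk /dev/vda            # confirm vda9 is the partition mounted on /
  resize2fs /dev/vda9       # on-line grow of the mounted ext4 filesystem to fill the partition
  df -h /                   # verify the new size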
Dec 13 07:36:42.577064 dbus-daemon[1280]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1331 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 13 07:36:42.577529 env[1313]: time="2024-12-13T07:36:42.577386179Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 07:36:42.577698 env[1313]: time="2024-12-13T07:36:42.577654401Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 07:36:42.578057 env[1313]: time="2024-12-13T07:36:42.578028439Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 07:36:42.578821 env[1313]: time="2024-12-13T07:36:42.578791583Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 07:36:42.579291 env[1313]: time="2024-12-13T07:36:42.579236463Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 07:36:42.579420 env[1313]: time="2024-12-13T07:36:42.579391130Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 07:36:42.579673 env[1313]: time="2024-12-13T07:36:42.579642686Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 07:36:42.581480 env[1313]: time="2024-12-13T07:36:42.581449006Z" level=info msg="metadata content store policy set" policy=shared Dec 13 07:36:42.585572 systemd[1]: Starting polkit.service... Dec 13 07:36:42.596343 env[1313]: time="2024-12-13T07:36:42.596293972Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 07:36:42.596571 env[1313]: time="2024-12-13T07:36:42.596539819Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 07:36:42.596704 env[1313]: time="2024-12-13T07:36:42.596672443Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 07:36:42.596988 env[1313]: time="2024-12-13T07:36:42.596958615Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 07:36:42.597131 env[1313]: time="2024-12-13T07:36:42.597099966Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 07:36:42.597313 env[1313]: time="2024-12-13T07:36:42.597277228Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 07:36:42.597486 env[1313]: time="2024-12-13T07:36:42.597452586Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 07:36:42.597626 env[1313]: time="2024-12-13T07:36:42.597596606Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 07:36:42.597752 env[1313]: time="2024-12-13T07:36:42.597722922Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." 
type=io.containerd.service.v1 Dec 13 07:36:42.597900 env[1313]: time="2024-12-13T07:36:42.597858706Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 07:36:42.598040 env[1313]: time="2024-12-13T07:36:42.598009964Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 07:36:42.598199 env[1313]: time="2024-12-13T07:36:42.598169015Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 07:36:42.598548 env[1313]: time="2024-12-13T07:36:42.598519322Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 07:36:42.598848 env[1313]: time="2024-12-13T07:36:42.598820032Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 07:36:42.599563 env[1313]: time="2024-12-13T07:36:42.599531671Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 07:36:42.601026 env[1313]: time="2024-12-13T07:36:42.600992454Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 07:36:42.601166 env[1313]: time="2024-12-13T07:36:42.601134051Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 07:36:42.601379 env[1313]: time="2024-12-13T07:36:42.601347504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 07:36:42.601541 env[1313]: time="2024-12-13T07:36:42.601506229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 07:36:42.601710 env[1313]: time="2024-12-13T07:36:42.601673041Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 07:36:42.601865 env[1313]: time="2024-12-13T07:36:42.601833972Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 07:36:42.602026 env[1313]: time="2024-12-13T07:36:42.601995187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 07:36:42.604117 env[1313]: time="2024-12-13T07:36:42.604081451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 07:36:42.604318 env[1313]: time="2024-12-13T07:36:42.604286050Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 07:36:42.604983 env[1313]: time="2024-12-13T07:36:42.604950013Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 07:36:42.605209 env[1313]: time="2024-12-13T07:36:42.605176243Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 07:36:42.605600 env[1313]: time="2024-12-13T07:36:42.605569859Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 07:36:42.610115 polkitd[1356]: Started polkitd version 121 Dec 13 07:36:42.610873 env[1313]: time="2024-12-13T07:36:42.610837651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 07:36:42.611053 env[1313]: time="2024-12-13T07:36:42.611022373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Dec 13 07:36:42.611207 env[1313]: time="2024-12-13T07:36:42.611169510Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 07:36:42.611395 env[1313]: time="2024-12-13T07:36:42.611350854Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 07:36:42.611537 env[1313]: time="2024-12-13T07:36:42.611506417Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 07:36:42.611767 env[1313]: time="2024-12-13T07:36:42.611737529Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 07:36:42.612021 env[1313]: time="2024-12-13T07:36:42.611991455Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 07:36:42.612688 env[1313]: time="2024-12-13T07:36:42.612581380Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 07:36:42.616075 env[1313]: time="2024-12-13T07:36:42.612964338Z" level=info msg="Connect containerd service" Dec 13 07:36:42.616075 env[1313]: time="2024-12-13T07:36:42.613082938Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 07:36:42.616075 env[1313]: time="2024-12-13T07:36:42.615098788Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up 
network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 07:36:42.616075 env[1313]: time="2024-12-13T07:36:42.615258853Z" level=info msg="Start subscribing containerd event" Dec 13 07:36:42.616075 env[1313]: time="2024-12-13T07:36:42.615358256Z" level=info msg="Start recovering state" Dec 13 07:36:42.616075 env[1313]: time="2024-12-13T07:36:42.615479170Z" level=info msg="Start event monitor" Dec 13 07:36:42.616075 env[1313]: time="2024-12-13T07:36:42.615519534Z" level=info msg="Start snapshots syncer" Dec 13 07:36:42.616075 env[1313]: time="2024-12-13T07:36:42.615539096Z" level=info msg="Start cni network conf syncer for default" Dec 13 07:36:42.616075 env[1313]: time="2024-12-13T07:36:42.615552385Z" level=info msg="Start streaming server" Dec 13 07:36:42.624250 env[1313]: time="2024-12-13T07:36:42.622809499Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 07:36:42.624874 env[1313]: time="2024-12-13T07:36:42.624845269Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 07:36:42.629056 polkitd[1356]: Loading rules from directory /etc/polkit-1/rules.d Dec 13 07:36:42.629207 polkitd[1356]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 13 07:36:42.630614 polkitd[1356]: Finished loading, compiling and executing 2 rules Dec 13 07:36:42.632004 dbus-daemon[1280]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 13 07:36:42.632364 systemd[1]: Started polkit.service. Dec 13 07:36:42.633789 env[1313]: time="2024-12-13T07:36:42.633750408Z" level=info msg="containerd successfully booted in 0.235581s" Dec 13 07:36:42.633910 systemd[1]: Started containerd.service. Dec 13 07:36:42.635410 polkitd[1356]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 13 07:36:42.653515 systemd-hostnamed[1331]: Hostname set to (static) Dec 13 07:36:42.732470 systemd[1]: Created slice system-sshd.slice. Dec 13 07:36:43.350854 tar[1310]: linux-amd64/LICENSE Dec 13 07:36:43.351795 tar[1310]: linux-amd64/README.md Dec 13 07:36:43.365651 systemd[1]: Finished prepare-helm.service. Dec 13 07:36:43.394023 locksmithd[1346]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 07:36:43.537249 systemd[1]: Started kubelet.service. Dec 13 07:36:44.356516 kubelet[1378]: E1213 07:36:44.356376 1378 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 07:36:44.359191 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 07:36:44.359516 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 07:36:44.813678 sshd_keygen[1320]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 07:36:44.840295 systemd[1]: Finished sshd-keygen.service. Dec 13 07:36:44.846025 systemd[1]: Starting issuegen.service... Dec 13 07:36:44.849068 systemd[1]: Started sshd@0-10.230.78.246:22-139.178.89.65:49912.service. Dec 13 07:36:44.858651 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 07:36:44.859061 systemd[1]: Finished issuegen.service. Dec 13 07:36:44.862523 systemd[1]: Starting systemd-user-sessions.service... Dec 13 07:36:44.876538 systemd[1]: Finished systemd-user-sessions.service. 
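The kubelet exits immediately because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-provisioned node that file is normally written by "kubeadm init" or "kubeadm join", and the containerd CNI warning above clears once a network plugin drops a conflist into /etc/cni/net.d. The minimal hand-written stand-in below is hypothetical, shown only to illustrate the file the error refers to; it is not this cluster's real configuration:

  mkdir -p /var/lib/kubelet
  cat <<'EOF' >/var/lib/kubelet/config.yaml
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  # matches SystemdCgroup:false in the containerd config dumped above
  cgroupDriver: cgroupfs
  staticPodPath: /etc/kubernetes/manifests
  EOF
  systemctl restart kubelet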
Dec 13 07:36:44.880140 systemd[1]: Started getty@tty1.service. Dec 13 07:36:44.884401 systemd[1]: Started serial-getty@ttyS0.service. Dec 13 07:36:44.885759 systemd[1]: Reached target getty.target. Dec 13 07:36:45.762263 sshd[1395]: Accepted publickey for core from 139.178.89.65 port 49912 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY Dec 13 07:36:45.765189 sshd[1395]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 07:36:45.782654 systemd[1]: Created slice user-500.slice. Dec 13 07:36:45.786978 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 07:36:45.795040 systemd-logind[1301]: New session 1 of user core. Dec 13 07:36:45.805661 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 07:36:45.809947 systemd[1]: Starting user@500.service... Dec 13 07:36:45.818011 (systemd)[1409]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 07:36:45.940291 systemd[1409]: Queued start job for default target default.target. Dec 13 07:36:45.941973 systemd[1409]: Reached target paths.target. Dec 13 07:36:45.942183 systemd[1409]: Reached target sockets.target. Dec 13 07:36:45.942362 systemd[1409]: Reached target timers.target. Dec 13 07:36:45.942522 systemd[1409]: Reached target basic.target. Dec 13 07:36:45.942978 systemd[1]: Started user@500.service. Dec 13 07:36:45.944522 systemd[1409]: Reached target default.target. Dec 13 07:36:45.944745 systemd[1409]: Startup finished in 115ms. Dec 13 07:36:45.945361 systemd[1]: Started session-1.scope. Dec 13 07:36:46.574112 systemd[1]: Started sshd@1-10.230.78.246:22-139.178.89.65:49928.service. Dec 13 07:36:47.464675 sshd[1418]: Accepted publickey for core from 139.178.89.65 port 49928 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY Dec 13 07:36:47.466979 sshd[1418]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 07:36:47.474673 systemd-logind[1301]: New session 2 of user core. Dec 13 07:36:47.475746 systemd[1]: Started session-2.scope. Dec 13 07:36:48.083680 sshd[1418]: pam_unix(sshd:session): session closed for user core Dec 13 07:36:48.088557 systemd[1]: sshd@1-10.230.78.246:22-139.178.89.65:49928.service: Deactivated successfully. Dec 13 07:36:48.089730 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 07:36:48.091028 systemd-logind[1301]: Session 2 logged out. Waiting for processes to exit. Dec 13 07:36:48.092630 systemd-logind[1301]: Removed session 2. Dec 13 07:36:48.228672 systemd[1]: Started sshd@2-10.230.78.246:22-139.178.89.65:54728.service. Dec 13 07:36:49.116189 sshd[1425]: Accepted publickey for core from 139.178.89.65 port 54728 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY Dec 13 07:36:49.118983 sshd[1425]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 07:36:49.125538 systemd-logind[1301]: New session 3 of user core. Dec 13 07:36:49.126400 systemd[1]: Started session-3.scope. 
Dec 13 07:36:49.207659 coreos-metadata[1279]: Dec 13 07:36:49.207 WARN failed to locate config-drive, using the metadata service API instead Dec 13 07:36:49.260468 coreos-metadata[1279]: Dec 13 07:36:49.260 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Dec 13 07:36:49.283363 coreos-metadata[1279]: Dec 13 07:36:49.283 INFO Fetch successful Dec 13 07:36:49.283676 coreos-metadata[1279]: Dec 13 07:36:49.283 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 13 07:36:49.320823 coreos-metadata[1279]: Dec 13 07:36:49.320 INFO Fetch successful Dec 13 07:36:49.329920 unknown[1279]: wrote ssh authorized keys file for user: core Dec 13 07:36:49.343298 update-ssh-keys[1432]: Updated "/home/core/.ssh/authorized_keys" Dec 13 07:36:49.344829 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Dec 13 07:36:49.346155 systemd[1]: Reached target multi-user.target. Dec 13 07:36:49.349770 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 07:36:49.363353 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 07:36:49.363673 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 07:36:49.370965 systemd[1]: Startup finished in 8.190s (kernel) + 13.689s (userspace) = 21.880s. Dec 13 07:36:49.736529 sshd[1425]: pam_unix(sshd:session): session closed for user core Dec 13 07:36:49.740731 systemd[1]: sshd@2-10.230.78.246:22-139.178.89.65:54728.service: Deactivated successfully. Dec 13 07:36:49.742199 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 07:36:49.743917 systemd-logind[1301]: Session 3 logged out. Waiting for processes to exit. Dec 13 07:36:49.745434 systemd-logind[1301]: Removed session 3. Dec 13 07:36:54.611473 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 07:36:54.612027 systemd[1]: Stopped kubelet.service. Dec 13 07:36:54.615541 systemd[1]: Starting kubelet.service... Dec 13 07:36:54.792832 systemd[1]: Started kubelet.service. Dec 13 07:36:54.895619 kubelet[1448]: E1213 07:36:54.895050 1448 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 07:36:54.900385 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 07:36:54.900693 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 07:36:59.883014 systemd[1]: Started sshd@3-10.230.78.246:22-139.178.89.65:36886.service. Dec 13 07:37:00.771617 sshd[1455]: Accepted publickey for core from 139.178.89.65 port 36886 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY Dec 13 07:37:00.773831 sshd[1455]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 07:37:00.782036 systemd-logind[1301]: New session 4 of user core. Dec 13 07:37:00.782994 systemd[1]: Started session-4.scope. Dec 13 07:37:01.392681 sshd[1455]: pam_unix(sshd:session): session closed for user core Dec 13 07:37:01.397062 systemd[1]: sshd@3-10.230.78.246:22-139.178.89.65:36886.service: Deactivated successfully. Dec 13 07:37:01.398649 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 07:37:01.400551 systemd-logind[1301]: Session 4 logged out. Waiting for processes to exit. Dec 13 07:37:01.402412 systemd-logind[1301]: Removed session 4. 
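coreos-metadata found no config-drive and fell back to the metadata service to fetch the SSH key it wrote to /home/core/.ssh/authorized_keys. The same endpoints it logged can be queried by hand when debugging key injection:

  curl -s http://169.254.169.254/latest/meta-data/public-keys
  curl -s http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key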
Dec 13 07:37:01.537899 systemd[1]: Started sshd@4-10.230.78.246:22-139.178.89.65:36902.service. Dec 13 07:37:02.425781 sshd[1462]: Accepted publickey for core from 139.178.89.65 port 36902 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY Dec 13 07:37:02.428644 sshd[1462]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 07:37:02.435797 systemd-logind[1301]: New session 5 of user core. Dec 13 07:37:02.436703 systemd[1]: Started session-5.scope. Dec 13 07:37:03.039624 sshd[1462]: pam_unix(sshd:session): session closed for user core Dec 13 07:37:03.043518 systemd[1]: sshd@4-10.230.78.246:22-139.178.89.65:36902.service: Deactivated successfully. Dec 13 07:37:03.044729 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 07:37:03.046109 systemd-logind[1301]: Session 5 logged out. Waiting for processes to exit. Dec 13 07:37:03.047365 systemd-logind[1301]: Removed session 5. Dec 13 07:37:03.186188 systemd[1]: Started sshd@5-10.230.78.246:22-139.178.89.65:36918.service. Dec 13 07:37:04.077789 sshd[1469]: Accepted publickey for core from 139.178.89.65 port 36918 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY Dec 13 07:37:04.080046 sshd[1469]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 07:37:04.088115 systemd[1]: Started session-6.scope. Dec 13 07:37:04.088746 systemd-logind[1301]: New session 6 of user core. Dec 13 07:37:04.700484 sshd[1469]: pam_unix(sshd:session): session closed for user core Dec 13 07:37:04.704499 systemd[1]: sshd@5-10.230.78.246:22-139.178.89.65:36918.service: Deactivated successfully. Dec 13 07:37:04.705795 systemd-logind[1301]: Session 6 logged out. Waiting for processes to exit. Dec 13 07:37:04.705899 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 07:37:04.709279 systemd-logind[1301]: Removed session 6. Dec 13 07:37:04.844388 systemd[1]: Started sshd@6-10.230.78.246:22-139.178.89.65:36928.service. Dec 13 07:37:05.069068 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 07:37:05.069384 systemd[1]: Stopped kubelet.service. Dec 13 07:37:05.072146 systemd[1]: Starting kubelet.service... Dec 13 07:37:05.224722 systemd[1]: Started kubelet.service. Dec 13 07:37:05.297313 kubelet[1486]: E1213 07:37:05.297240 1486 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 07:37:05.300284 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 07:37:05.300569 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 07:37:05.728161 sshd[1476]: Accepted publickey for core from 139.178.89.65 port 36928 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY Dec 13 07:37:05.729682 sshd[1476]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 07:37:05.737865 systemd[1]: Started session-7.scope. Dec 13 07:37:05.738348 systemd-logind[1301]: New session 7 of user core. 
Dec 13 07:37:06.215192 sudo[1495]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 07:37:06.215671 sudo[1495]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 07:37:06.236882 dbus-daemon[1280]: Э\u000e\xad\xb7U: received setenforce notice (enforcing=-1542866464) Dec 13 07:37:06.237299 sudo[1495]: pam_unix(sudo:session): session closed for user root Dec 13 07:37:06.381381 sshd[1476]: pam_unix(sshd:session): session closed for user core Dec 13 07:37:06.386750 systemd[1]: sshd@6-10.230.78.246:22-139.178.89.65:36928.service: Deactivated successfully. Dec 13 07:37:06.388407 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 07:37:06.390190 systemd-logind[1301]: Session 7 logged out. Waiting for processes to exit. Dec 13 07:37:06.392204 systemd-logind[1301]: Removed session 7. Dec 13 07:37:06.527529 systemd[1]: Started sshd@7-10.230.78.246:22-139.178.89.65:36930.service. Dec 13 07:37:07.417214 sshd[1499]: Accepted publickey for core from 139.178.89.65 port 36930 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY Dec 13 07:37:07.420235 sshd[1499]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 07:37:07.428586 systemd[1]: Started session-8.scope. Dec 13 07:37:07.429296 systemd-logind[1301]: New session 8 of user core. Dec 13 07:37:07.896265 sudo[1504]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 07:37:07.896647 sudo[1504]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 07:37:07.901538 sudo[1504]: pam_unix(sudo:session): session closed for user root Dec 13 07:37:07.909379 sudo[1503]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 07:37:07.910215 sudo[1503]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 07:37:07.924977 systemd[1]: Stopping audit-rules.service... Dec 13 07:37:07.926000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 13 07:37:07.928085 auditctl[1507]: No rules Dec 13 07:37:07.928617 kernel: kauditd_printk_skb: 146 callbacks suppressed Dec 13 07:37:07.928715 kernel: audit: type=1305 audit(1734075427.926:156): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 13 07:37:07.932993 kernel: audit: type=1300 audit(1734075427.926:156): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcb77ba200 a2=420 a3=0 items=0 ppid=1 pid=1507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:07.926000 audit[1507]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcb77ba200 a2=420 a3=0 items=0 ppid=1 pid=1507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:07.929336 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 07:37:07.929698 systemd[1]: Stopped audit-rules.service. Dec 13 07:37:07.932550 systemd[1]: Starting audit-rules.service... 
Dec 13 07:37:07.926000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Dec 13 07:37:07.928000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:37:07.947272 kernel: audit: type=1327 audit(1734075427.926:156): proctitle=2F7362696E2F617564697463746C002D44 Dec 13 07:37:07.947346 kernel: audit: type=1131 audit(1734075427.928:157): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:37:07.966812 augenrules[1525]: No rules Dec 13 07:37:07.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:37:07.967718 systemd[1]: Finished audit-rules.service. Dec 13 07:37:07.973900 kernel: audit: type=1130 audit(1734075427.967:158): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:37:07.973942 sudo[1503]: pam_unix(sudo:session): session closed for user root Dec 13 07:37:07.973000 audit[1503]: USER_END pid=1503 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 07:37:07.980947 kernel: audit: type=1106 audit(1734075427.973:159): pid=1503 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 07:37:07.981077 kernel: audit: type=1104 audit(1734075427.973:160): pid=1503 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 07:37:07.973000 audit[1503]: CRED_DISP pid=1503 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 07:37:08.118274 sshd[1499]: pam_unix(sshd:session): session closed for user core Dec 13 07:37:08.119000 audit[1499]: USER_END pid=1499 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:37:08.128721 systemd[1]: sshd@7-10.230.78.246:22-139.178.89.65:36930.service: Deactivated successfully. 
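The interactive session above deleted the shipped rule files from /etc/audit/rules.d and restarted audit-rules.service, which is why both auditctl and augenrules report "No rules". A sketch of the same reload done by hand with the standard audit userspace tools:

  auditctl -D              # flush loaded rules (the CONFIG_CHANGE op=remove_rule events above)
  ls /etc/audit/rules.d/   # empty after the 'rm -rf' of 80-selinux.rules and 99-default.rules
  augenrules --load        # rebuild and load /etc/audit/audit.rules; reports "No rules" here
  auditctl -l              # confirm the loaded rule set is now empty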
Dec 13 07:37:08.128973 kernel: audit: type=1106 audit(1734075428.119:161): pid=1499 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:37:08.119000 audit[1499]: CRED_DISP pid=1499 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:37:08.136002 kernel: audit: type=1104 audit(1734075428.119:162): pid=1499 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:37:08.130061 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 07:37:08.135207 systemd-logind[1301]: Session 8 logged out. Waiting for processes to exit. Dec 13 07:37:08.128000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.230.78.246:22-139.178.89.65:36930 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:37:08.142239 systemd-logind[1301]: Removed session 8. Dec 13 07:37:08.142981 kernel: audit: type=1131 audit(1734075428.128:163): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.230.78.246:22-139.178.89.65:36930 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:37:08.264290 systemd[1]: Started sshd@8-10.230.78.246:22-139.178.89.65:59394.service. Dec 13 07:37:08.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.230.78.246:22-139.178.89.65:59394 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:37:09.156000 audit[1532]: USER_ACCT pid=1532 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:37:09.157989 sshd[1532]: Accepted publickey for core from 139.178.89.65 port 59394 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY Dec 13 07:37:09.158000 audit[1532]: CRED_ACQ pid=1532 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:37:09.158000 audit[1532]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeb3b23610 a2=3 a3=0 items=0 ppid=1 pid=1532 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:09.158000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 07:37:09.160263 sshd[1532]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 07:37:09.167271 systemd-logind[1301]: New session 9 of user core. Dec 13 07:37:09.168182 systemd[1]: Started session-9.scope. 
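The audit PROCTITLE fields above carry the process command line as hex with NUL-separated arguments. A one-liner to decode them, shown against the auditctl record logged during the audit-rules restart:

  echo 2F7362696E2F617564697463746C002D44 | xxd -r -p | tr '\0' ' '; echo
  # prints: /sbin/auditctl -D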
Dec 13 07:37:09.175000 audit[1532]: USER_START pid=1532 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:37:09.178000 audit[1535]: CRED_ACQ pid=1535 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:37:09.635000 audit[1536]: USER_ACCT pid=1536 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 07:37:09.636826 sudo[1536]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 07:37:09.636000 audit[1536]: CRED_REFR pid=1536 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 07:37:09.637870 sudo[1536]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 07:37:09.640000 audit[1536]: USER_START pid=1536 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 07:37:09.683632 systemd[1]: Starting docker.service... Dec 13 07:37:09.749993 env[1546]: time="2024-12-13T07:37:09.749849947Z" level=info msg="Starting up" Dec 13 07:37:09.753575 env[1546]: time="2024-12-13T07:37:09.753433288Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 07:37:09.753575 env[1546]: time="2024-12-13T07:37:09.753474765Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 07:37:09.753575 env[1546]: time="2024-12-13T07:37:09.753505568Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 07:37:09.753575 env[1546]: time="2024-12-13T07:37:09.753529600Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 07:37:09.757983 env[1546]: time="2024-12-13T07:37:09.757952184Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 07:37:09.758112 env[1546]: time="2024-12-13T07:37:09.758085067Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 07:37:09.758234 env[1546]: time="2024-12-13T07:37:09.758204645Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 07:37:09.758363 env[1546]: time="2024-12-13T07:37:09.758336812Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 07:37:09.900284 env[1546]: time="2024-12-13T07:37:09.899432245Z" level=warning msg="Your kernel does not support cgroup blkio weight" Dec 13 07:37:09.900284 env[1546]: time="2024-12-13T07:37:09.899496455Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Dec 13 07:37:09.900719 env[1546]: time="2024-12-13T07:37:09.900602784Z" level=info msg="Loading containers: start." 
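The NETFILTER_CFG burst that follows is dockerd creating its chains; each record's PROCTITLE decodes to an iptables call such as "/usr/sbin/iptables --wait -t nat -N DOCKER". Once the daemon is up, the resulting chains can be inspected with the same flags it used:

  iptables --wait -t nat -L DOCKER -n
  iptables --wait -t filter -L DOCKER-USER -n
  iptables --wait -t filter -L DOCKER-ISOLATION-STAGE-1 -n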
Dec 13 07:37:10.000000 audit[1578]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1578 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 07:37:10.000000 audit[1578]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffc7ddb4850 a2=0 a3=7ffc7ddb483c items=0 ppid=1546 pid=1578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:10.000000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 13 07:37:10.002000 audit[1580]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1580 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 07:37:10.002000 audit[1580]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffc71930d70 a2=0 a3=7ffc71930d5c items=0 ppid=1546 pid=1580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:10.002000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 13 07:37:10.005000 audit[1582]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1582 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 07:37:10.005000 audit[1582]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff5f0717f0 a2=0 a3=7fff5f0717dc items=0 ppid=1546 pid=1582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:10.005000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 13 07:37:10.008000 audit[1584]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1584 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 07:37:10.008000 audit[1584]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffed11854f0 a2=0 a3=7ffed11854dc items=0 ppid=1546 pid=1584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:10.008000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 13 07:37:10.011000 audit[1586]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1586 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 07:37:10.011000 audit[1586]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffdb4751710 a2=0 a3=7ffdb47516fc items=0 ppid=1546 pid=1586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:10.011000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Dec 13 07:37:10.032000 audit[1591]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1591 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Dec 13 07:37:10.032000 audit[1591]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd621cbee0 a2=0 a3=7ffd621cbecc items=0 ppid=1546 pid=1591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:10.032000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Dec 13 07:37:10.042000 audit[1593]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1593 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 07:37:10.042000 audit[1593]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffffb9c210 a2=0 a3=7fffffb9c1fc items=0 ppid=1546 pid=1593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:10.042000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 13 07:37:10.045000 audit[1595]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1595 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 07:37:10.045000 audit[1595]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffe150ddaf0 a2=0 a3=7ffe150ddadc items=0 ppid=1546 pid=1595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:10.045000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 13 07:37:10.049000 audit[1597]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1597 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 07:37:10.049000 audit[1597]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffe52c64250 a2=0 a3=7ffe52c6423c items=0 ppid=1546 pid=1597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:10.049000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 13 07:37:10.059000 audit[1601]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1601 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 07:37:10.059000 audit[1601]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffec2e29160 a2=0 a3=7ffec2e2914c items=0 ppid=1546 pid=1601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:10.059000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Dec 13 07:37:10.064000 audit[1602]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1602 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 07:37:10.064000 audit[1602]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffce2e3fa90 a2=0 a3=7ffce2e3fa7c items=0 ppid=1546 
pid=1602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:10.064000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 13 07:37:10.079982 kernel: Initializing XFRM netlink socket Dec 13 07:37:10.132519 env[1546]: time="2024-12-13T07:37:10.132453114Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Dec 13 07:37:10.174000 audit[1611]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1611 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 07:37:10.174000 audit[1611]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffc565a9a20 a2=0 a3=7ffc565a9a0c items=0 ppid=1546 pid=1611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:10.174000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 13 07:37:10.186000 audit[1614]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1614 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 07:37:10.186000 audit[1614]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7fff8fd5ad40 a2=0 a3=7fff8fd5ad2c items=0 ppid=1546 pid=1614 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:10.186000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 13 07:37:10.191000 audit[1617]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1617 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 07:37:10.191000 audit[1617]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffeb02cf8a0 a2=0 a3=7ffeb02cf88c items=0 ppid=1546 pid=1617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:10.191000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Dec 13 07:37:10.194000 audit[1619]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1619 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 07:37:10.194000 audit[1619]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7fff215b9140 a2=0 a3=7fff215b912c items=0 ppid=1546 pid=1619 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:10.194000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Dec 13 07:37:10.197000 audit[1621]: NETFILTER_CFG 
table=nat:17 family=2 entries=2 op=nft_register_chain pid=1621 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 07:37:10.197000 audit[1621]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffe02835160 a2=0 a3=7ffe0283514c items=0 ppid=1546 pid=1621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:10.197000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 13 07:37:10.201000 audit[1623]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1623 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 07:37:10.201000 audit[1623]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffd6f087aa0 a2=0 a3=7ffd6f087a8c items=0 ppid=1546 pid=1623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:10.201000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 13 07:37:10.204000 audit[1625]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1625 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 07:37:10.204000 audit[1625]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffef5885cf0 a2=0 a3=7ffef5885cdc items=0 ppid=1546 pid=1625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:10.204000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Dec 13 07:37:10.215000 audit[1628]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1628 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 07:37:10.215000 audit[1628]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffeec8bee10 a2=0 a3=7ffeec8bedfc items=0 ppid=1546 pid=1628 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:10.215000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 13 07:37:10.218000 audit[1630]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1630 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 07:37:10.218000 audit[1630]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffec69743f0 a2=0 a3=7ffec69743dc items=0 ppid=1546 pid=1630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:10.218000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 13 07:37:10.222000 audit[1632]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1632 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 07:37:10.222000 audit[1632]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffe0de436d0 a2=0 a3=7ffe0de436bc items=0 ppid=1546 pid=1632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:10.222000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 13 07:37:10.225000 audit[1634]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1634 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 07:37:10.225000 audit[1634]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe32f1d640 a2=0 a3=7ffe32f1d62c items=0 ppid=1546 pid=1634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:10.225000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 13 07:37:10.227214 systemd-networkd[1082]: docker0: Link UP Dec 13 07:37:10.238000 audit[1638]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1638 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 07:37:10.238000 audit[1638]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc154760e0 a2=0 a3=7ffc154760cc items=0 ppid=1546 pid=1638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:10.238000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Dec 13 07:37:10.243000 audit[1639]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1639 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 07:37:10.243000 audit[1639]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd18ff8050 a2=0 a3=7ffd18ff803c items=0 ppid=1546 pid=1639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:10.243000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 13 07:37:10.245066 env[1546]: time="2024-12-13T07:37:10.245006316Z" level=info msg="Loading containers: done." 
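The audit entries above record each iptables call Docker makes while creating the DOCKER, DOCKER-ISOLATION-STAGE-1/2 and DOCKER-USER chains, but the actual command line is hidden in the hex-encoded PROCTITLE field. A minimal decoding sketch follows (the audit format stores argv as hex bytes with NUL separators; the sample value is copied verbatim from the first PROCTITLE entry above):

```python
# Decode an audit PROCTITLE hex string back into the original argv.
# The kernel audit subsystem hex-encodes the command line and separates
# arguments with NUL bytes.
def decode_proctitle(hex_string: str) -> list[str]:
    raw = bytes.fromhex(hex_string)
    return [arg.decode("utf-8", errors="replace") for arg in raw.split(b"\x00") if arg]

# Value taken from the first audit entry above ("-t nat -N DOCKER"):
print(decode_proctitle(
    "2F7573722F7362696E2F69707461626C6573002D2D77616974"
    "002D74006E6174002D4E00444F434B4552"
))
# ['/usr/sbin/iptables', '--wait', '-t', 'nat', '-N', 'DOCKER']
```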
Dec 13 07:37:10.274616 env[1546]: time="2024-12-13T07:37:10.274549824Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 07:37:10.275223 env[1546]: time="2024-12-13T07:37:10.275193995Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Dec 13 07:37:10.275506 env[1546]: time="2024-12-13T07:37:10.275479507Z" level=info msg="Daemon has completed initialization" Dec 13 07:37:10.293169 systemd[1]: Started docker.service. Dec 13 07:37:10.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:37:10.298308 env[1546]: time="2024-12-13T07:37:10.298253026Z" level=info msg="API listen on /run/docker.sock" Dec 13 07:37:11.659426 env[1313]: time="2024-12-13T07:37:11.659213251Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 07:37:12.446521 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2490702764.mount: Deactivated successfully. Dec 13 07:37:12.694449 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 13 07:37:12.694000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:37:15.318622 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 07:37:15.318943 systemd[1]: Stopped kubelet.service. Dec 13 07:37:15.328786 kernel: kauditd_printk_skb: 85 callbacks suppressed Dec 13 07:37:15.329155 kernel: audit: type=1130 audit(1734075435.318:199): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:37:15.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:37:15.327011 systemd[1]: Starting kubelet.service... Dec 13 07:37:15.335073 kernel: audit: type=1131 audit(1734075435.318:200): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:37:15.318000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:37:15.488061 env[1313]: time="2024-12-13T07:37:15.486281865Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:37:15.508370 systemd[1]: Started kubelet.service. Dec 13 07:37:15.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 07:37:15.514059 kernel: audit: type=1130 audit(1734075435.507:201): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:37:15.610801 env[1313]: time="2024-12-13T07:37:15.609707377Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:37:15.612906 kubelet[1691]: E1213 07:37:15.612433 1691 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 07:37:15.614400 env[1313]: time="2024-12-13T07:37:15.614358223Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:37:15.616079 env[1313]: time="2024-12-13T07:37:15.616039609Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:37:15.617731 env[1313]: time="2024-12-13T07:37:15.617552315Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Dec 13 07:37:15.622317 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 07:37:15.622598 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 07:37:15.622000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 07:37:15.628903 kernel: audit: type=1131 audit(1734075435.622:202): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Dec 13 07:37:15.635526 env[1313]: time="2024-12-13T07:37:15.635470392Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 07:37:18.890777 env[1313]: time="2024-12-13T07:37:18.890657073Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:37:18.894481 env[1313]: time="2024-12-13T07:37:18.894419517Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:37:18.897708 env[1313]: time="2024-12-13T07:37:18.897647008Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:37:18.900736 env[1313]: time="2024-12-13T07:37:18.900699015Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:37:18.902249 env[1313]: time="2024-12-13T07:37:18.902191257Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\"" Dec 13 07:37:18.917343 env[1313]: time="2024-12-13T07:37:18.917275490Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 07:37:20.730809 env[1313]: time="2024-12-13T07:37:20.730716585Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:37:20.732928 env[1313]: time="2024-12-13T07:37:20.732891867Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:37:20.735319 env[1313]: time="2024-12-13T07:37:20.735283673Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:37:20.737760 env[1313]: time="2024-12-13T07:37:20.737722539Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:37:20.738866 env[1313]: time="2024-12-13T07:37:20.738819032Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Dec 13 07:37:20.752694 env[1313]: time="2024-12-13T07:37:20.752642281Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 07:37:22.390919 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2416404984.mount: Deactivated successfully. 
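The kubelet restarts above keep exiting with status 1 because /var/lib/kubelet/config.yaml is missing, as the run.go error states. A minimal sketch of the same check; the path is taken from the error message, and the kubeadm remark is an assumption about how this node is provisioned:

```python
# Reproduce the check the kubelet is failing above: its config file is absent.
# The path comes from the logged error; on kubeadm-provisioned nodes this file
# is normally written by `kubeadm init` / `kubeadm join` (assumption here).
from pathlib import Path

KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

if not KUBELET_CONFIG.exists():
    print(f"kubelet will keep crash-looping until {KUBELET_CONFIG} exists")
```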
Dec 13 07:37:23.600063 env[1313]: time="2024-12-13T07:37:23.599965166Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:37:23.603388 env[1313]: time="2024-12-13T07:37:23.603352717Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:37:23.606041 env[1313]: time="2024-12-13T07:37:23.606010122Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:37:23.608310 env[1313]: time="2024-12-13T07:37:23.608276757Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:37:23.609996 env[1313]: time="2024-12-13T07:37:23.609253975Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Dec 13 07:37:23.631288 env[1313]: time="2024-12-13T07:37:23.631221144Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 07:37:24.278767 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2289698559.mount: Deactivated successfully. Dec 13 07:37:25.795984 env[1313]: time="2024-12-13T07:37:25.795875502Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:37:25.802704 env[1313]: time="2024-12-13T07:37:25.802640281Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:37:25.805293 env[1313]: time="2024-12-13T07:37:25.805235114Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:37:25.807946 env[1313]: time="2024-12-13T07:37:25.807905927Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:37:25.809237 env[1313]: time="2024-12-13T07:37:25.809193293Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 07:37:25.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:37:25.818815 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 13 07:37:25.819168 systemd[1]: Stopped kubelet.service. Dec 13 07:37:25.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 07:37:25.829453 systemd[1]: Starting kubelet.service... Dec 13 07:37:25.833269 kernel: audit: type=1130 audit(1734075445.818:203): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:37:25.833426 kernel: audit: type=1131 audit(1734075445.818:204): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:37:25.847124 env[1313]: time="2024-12-13T07:37:25.847056759Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 07:37:26.007232 systemd[1]: Started kubelet.service. Dec 13 07:37:26.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:37:26.012897 kernel: audit: type=1130 audit(1734075446.006:205): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:37:26.105547 kubelet[1732]: E1213 07:37:26.105478 1732 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 07:37:26.113928 kernel: audit: type=1131 audit(1734075446.108:206): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 07:37:26.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 07:37:26.108141 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 07:37:26.108507 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 07:37:26.477175 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2341429785.mount: Deactivated successfully. 
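systemd reports the kubelet restart loop in the journal ("Scheduled restart job, restart counter is at 4"); the same state can be queried directly from the unit. A small sketch using `systemctl show` (unit name taken from the log; the printed values are only an example):

```python
# Query systemd for the restart counter and state that the journal entries
# above report for kubelet.service.
import subprocess

out = subprocess.run(
    ["systemctl", "show", "kubelet.service", "-p", "NRestarts", "-p", "ActiveState"],
    capture_output=True, text=True,
)
print(out.stdout.strip())  # e.g. NRestarts=4 / ActiveState=activating
```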
Dec 13 07:37:26.479181 env[1313]: time="2024-12-13T07:37:26.479133678Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:37:26.480942 env[1313]: time="2024-12-13T07:37:26.480907248Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:37:26.483728 env[1313]: time="2024-12-13T07:37:26.483667694Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:37:26.486623 env[1313]: time="2024-12-13T07:37:26.486589867Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:37:26.488824 env[1313]: time="2024-12-13T07:37:26.487966557Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 07:37:26.508175 env[1313]: time="2024-12-13T07:37:26.508099694Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 07:37:27.132344 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1700397621.mount: Deactivated successfully. Dec 13 07:37:27.506679 update_engine[1302]: I1213 07:37:27.506181 1302 update_attempter.cc:509] Updating boot flags... Dec 13 07:37:30.790869 env[1313]: time="2024-12-13T07:37:30.790737763Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:37:30.794340 env[1313]: time="2024-12-13T07:37:30.794300243Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:37:30.796721 env[1313]: time="2024-12-13T07:37:30.796685177Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:37:30.799952 env[1313]: time="2024-12-13T07:37:30.799903824Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:37:30.801280 env[1313]: time="2024-12-13T07:37:30.801229033Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Dec 13 07:37:34.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:37:34.755129 systemd[1]: Stopped kubelet.service. Dec 13 07:37:34.754000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:37:34.762219 systemd[1]: Starting kubelet.service... 
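The containerd entries above log each control-plane image pull and the image reference it resolves to (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, pause, etcd). A small sketch, assuming the escaped-quote journal form shown here, that collects the image → reference pairs from such a dump:

```python
import re

# Collect "PullImage ... returns image reference ..." pairs from a journal
# dump in the escaped-quote form shown above (\" inside msg="...").
PULL_RE = re.compile(
    r'msg="PullImage \\"(?P<image>[^\\]+)\\" returns image reference \\"(?P<ref>sha256:[0-9a-f]+)\\"'
)

def pulled_images(journal_text: str) -> dict[str, str]:
    return {m["image"]: m["ref"] for m in PULL_RE.finditer(journal_text)}

# e.g. {'registry.k8s.io/pause:3.9': 'sha256:e6f18168...', ...}
```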
Dec 13 07:37:34.766485 kernel: audit: type=1130 audit(1734075454.754:207): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:37:34.766602 kernel: audit: type=1131 audit(1734075454.754:208): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:37:34.797940 systemd[1]: Reloading. Dec 13 07:37:34.939336 /usr/lib/systemd/system-generators/torcx-generator[1855]: time="2024-12-13T07:37:34Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 07:37:34.940073 /usr/lib/systemd/system-generators/torcx-generator[1855]: time="2024-12-13T07:37:34Z" level=info msg="torcx already run" Dec 13 07:37:35.043238 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 07:37:35.043775 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 07:37:35.073115 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 07:37:35.217131 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 07:37:35.217538 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 07:37:35.218236 systemd[1]: Stopped kubelet.service. Dec 13 07:37:35.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 07:37:35.226987 kernel: audit: type=1130 audit(1734075455.217:209): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 07:37:35.230857 systemd[1]: Starting kubelet.service... Dec 13 07:37:35.554560 systemd[1]: Started kubelet.service. Dec 13 07:37:35.563078 kernel: audit: type=1130 audit(1734075455.554:210): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:37:35.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:37:35.645244 kubelet[1907]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 07:37:35.645244 kubelet[1907]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Dec 13 07:37:35.645244 kubelet[1907]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 07:37:35.646242 kubelet[1907]: I1213 07:37:35.645334 1907 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 07:37:35.960355 kubelet[1907]: I1213 07:37:35.960295 1907 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 07:37:35.960355 kubelet[1907]: I1213 07:37:35.960344 1907 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 07:37:35.960724 kubelet[1907]: I1213 07:37:35.960693 1907 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 07:37:35.998453 kubelet[1907]: I1213 07:37:35.998390 1907 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 07:37:36.001410 kubelet[1907]: E1213 07:37:36.001339 1907 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.230.78.246:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.230.78.246:6443: connect: connection refused Dec 13 07:37:36.020007 kubelet[1907]: I1213 07:37:36.019956 1907 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 07:37:36.021110 kubelet[1907]: I1213 07:37:36.021087 1907 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 07:37:36.021541 kubelet[1907]: I1213 07:37:36.021508 1907 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 07:37:36.021891 kubelet[1907]: I1213 07:37:36.021856 1907 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 07:37:36.022021 kubelet[1907]: I1213 07:37:36.021999 1907 container_manager_linux.go:301] "Creating device plugin manager" Dec 
13 07:37:36.022354 kubelet[1907]: I1213 07:37:36.022331 1907 state_mem.go:36] "Initialized new in-memory state store" Dec 13 07:37:36.022674 kubelet[1907]: I1213 07:37:36.022651 1907 kubelet.go:396] "Attempting to sync node with API server" Dec 13 07:37:36.022833 kubelet[1907]: I1213 07:37:36.022810 1907 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 07:37:36.023041 kubelet[1907]: I1213 07:37:36.023018 1907 kubelet.go:312] "Adding apiserver pod source" Dec 13 07:37:36.024175 kubelet[1907]: I1213 07:37:36.024153 1907 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 07:37:36.026624 kubelet[1907]: I1213 07:37:36.026589 1907 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 07:37:36.027389 kubelet[1907]: W1213 07:37:36.026833 1907 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.230.78.246:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-ktue8.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.78.246:6443: connect: connection refused Dec 13 07:37:36.030902 kubelet[1907]: E1213 07:37:36.030861 1907 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.230.78.246:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-ktue8.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.78.246:6443: connect: connection refused Dec 13 07:37:36.031048 kubelet[1907]: W1213 07:37:36.027037 1907 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.230.78.246:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.78.246:6443: connect: connection refused Dec 13 07:37:36.031196 kubelet[1907]: E1213 07:37:36.031172 1907 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.230.78.246:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.78.246:6443: connect: connection refused Dec 13 07:37:36.031314 kubelet[1907]: I1213 07:37:36.030709 1907 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 07:37:36.033200 kubelet[1907]: W1213 07:37:36.033177 1907 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
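Every client-go reflector call above fails with "dial tcp 10.230.78.246:6443: connect: connection refused", i.e. the kubelet cannot yet reach the API server endpoint it is bootstrapping against. A minimal sketch of the same TCP check; the address and port are copied from the logged errors and nothing else about the cluster is assumed:

```python
# Reproduce the TCP connection the kubelet keeps failing above
# ("dial tcp 10.230.78.246:6443: connect: connection refused").
import socket

def apiserver_reachable(host: str = "10.230.78.246", port: int = 6443,
                        timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError as exc:
        print(f"cannot reach {host}:{port}: {exc}")
        return False

print(apiserver_reachable())
```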
Dec 13 07:37:36.034667 kubelet[1907]: I1213 07:37:36.034644 1907 server.go:1256] "Started kubelet" Dec 13 07:37:36.037083 kubelet[1907]: I1213 07:37:36.037053 1907 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 07:37:36.038463 kubelet[1907]: I1213 07:37:36.038438 1907 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 07:37:36.039156 kubelet[1907]: I1213 07:37:36.039133 1907 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 07:37:36.039315 kubelet[1907]: I1213 07:37:36.038515 1907 server.go:461] "Adding debug handlers to kubelet server" Dec 13 07:37:36.039000 audit[1907]: AVC avc: denied { mac_admin } for pid=1907 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:37:36.039000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 07:37:36.046213 kubelet[1907]: I1213 07:37:36.040526 1907 kubelet.go:1417] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Dec 13 07:37:36.046213 kubelet[1907]: I1213 07:37:36.040583 1907 kubelet.go:1421] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Dec 13 07:37:36.046213 kubelet[1907]: I1213 07:37:36.040734 1907 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 07:37:36.048158 kernel: audit: type=1400 audit(1734075456.039:211): avc: denied { mac_admin } for pid=1907 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:37:36.048265 kernel: audit: type=1401 audit(1734075456.039:211): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 07:37:36.048692 kubelet[1907]: E1213 07:37:36.048642 1907 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.78.246:6443/api/v1/namespaces/default/events\": dial tcp 10.230.78.246:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-ktue8.gb1.brightbox.com.1810ac7092aa638d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-ktue8.gb1.brightbox.com,UID:srv-ktue8.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-ktue8.gb1.brightbox.com,},FirstTimestamp:2024-12-13 07:37:36.034595725 +0000 UTC m=+0.460720184,LastTimestamp:2024-12-13 07:37:36.034595725 +0000 UTC m=+0.460720184,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-ktue8.gb1.brightbox.com,}" Dec 13 07:37:36.039000 audit[1907]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000aaa510 a1=c0008c3ed8 a2=c000aaa4e0 a3=25 items=0 ppid=1 pid=1907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:36.053659 kubelet[1907]: I1213 07:37:36.053632 1907 volume_manager.go:291] "Starting Kubelet Volume 
Manager" Dec 13 07:37:36.055297 kubelet[1907]: I1213 07:37:36.055271 1907 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 07:37:36.055553 kubelet[1907]: I1213 07:37:36.055530 1907 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 07:37:36.056900 kernel: audit: type=1300 audit(1734075456.039:211): arch=c000003e syscall=188 success=no exit=-22 a0=c000aaa510 a1=c0008c3ed8 a2=c000aaa4e0 a3=25 items=0 ppid=1 pid=1907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:36.039000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 07:37:36.058889 kubelet[1907]: W1213 07:37:36.058825 1907 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.230.78.246:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.78.246:6443: connect: connection refused Dec 13 07:37:36.059036 kubelet[1907]: E1213 07:37:36.059012 1907 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.230.78.246:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.78.246:6443: connect: connection refused Dec 13 07:37:36.059261 kubelet[1907]: E1213 07:37:36.059238 1907 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.78.246:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-ktue8.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.78.246:6443: connect: connection refused" interval="200ms" Dec 13 07:37:36.064922 kernel: audit: type=1327 audit(1734075456.039:211): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 07:37:36.065422 kubelet[1907]: I1213 07:37:36.065389 1907 factory.go:221] Registration of the systemd container factory successfully Dec 13 07:37:36.065542 kubelet[1907]: I1213 07:37:36.065510 1907 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 07:37:36.039000 audit[1907]: AVC avc: denied { mac_admin } for pid=1907 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:37:36.039000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 07:37:36.073128 kubelet[1907]: E1213 07:37:36.073099 1907 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 07:37:36.073720 kubelet[1907]: I1213 07:37:36.073700 1907 factory.go:221] Registration of the containerd container factory successfully Dec 13 07:37:36.074619 kernel: audit: type=1400 audit(1734075456.039:212): avc: denied { mac_admin } for pid=1907 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:37:36.074711 kernel: audit: type=1401 audit(1734075456.039:212): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 07:37:36.039000 audit[1907]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000912860 a1=c0008c3ef0 a2=c000aaa5a0 a3=25 items=0 ppid=1 pid=1907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:36.039000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 07:37:36.039000 audit[1918]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1918 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 07:37:36.039000 audit[1918]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc6d0ba040 a2=0 a3=7ffc6d0ba02c items=0 ppid=1907 pid=1918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:36.039000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 13 07:37:36.056000 audit[1919]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1919 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 07:37:36.056000 audit[1919]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffce7e78010 a2=0 a3=7ffce7e77ffc items=0 ppid=1907 pid=1919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:36.056000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 13 07:37:36.065000 audit[1921]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1921 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 07:37:36.065000 audit[1921]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffdce343980 a2=0 a3=7ffdce34396c items=0 ppid=1907 pid=1921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:36.065000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 07:37:36.066000 audit[1923]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1923 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 07:37:36.066000 audit[1923]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=312 a0=3 a1=7ffd88123260 a2=0 a3=7ffd8812324c items=0 ppid=1907 pid=1923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:36.066000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 07:37:36.104000 audit[1926]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1926 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 07:37:36.104000 audit[1926]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffefab77480 a2=0 a3=7ffefab7746c items=0 ppid=1907 pid=1926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:36.104000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Dec 13 07:37:36.106977 kubelet[1907]: I1213 07:37:36.106940 1907 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 07:37:36.107000 audit[1928]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1928 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 07:37:36.107000 audit[1928]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc582cac60 a2=0 a3=7ffc582cac4c items=0 ppid=1907 pid=1928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:36.107000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 13 07:37:36.108965 kubelet[1907]: I1213 07:37:36.108943 1907 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 07:37:36.109132 kubelet[1907]: I1213 07:37:36.109109 1907 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 07:37:36.109289 kubelet[1907]: I1213 07:37:36.109264 1907 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 07:37:36.109486 kubelet[1907]: E1213 07:37:36.109464 1907 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 07:37:36.111000 audit[1929]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=1929 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 07:37:36.111000 audit[1929]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffeaeb1ebe0 a2=0 a3=7ffeaeb1ebcc items=0 ppid=1907 pid=1929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:36.111000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 13 07:37:36.112000 audit[1930]: NETFILTER_CFG table=nat:33 family=2 entries=1 op=nft_register_chain pid=1930 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 07:37:36.112000 audit[1930]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe7bd15020 a2=0 a3=7ffe7bd1500c items=0 ppid=1907 pid=1930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:36.112000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 13 07:37:36.116000 audit[1932]: NETFILTER_CFG table=mangle:34 family=10 entries=1 op=nft_register_chain pid=1932 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 07:37:36.116000 audit[1932]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd86874c40 a2=0 a3=7ffd86874c2c items=0 ppid=1907 pid=1932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:36.116000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 13 07:37:36.120000 audit[1933]: NETFILTER_CFG table=filter:35 family=2 entries=1 op=nft_register_chain pid=1933 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 07:37:36.120000 audit[1933]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd2e893ca0 a2=0 a3=7ffd2e893c8c items=0 ppid=1907 pid=1933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:36.120000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 13 07:37:36.121000 audit[1934]: NETFILTER_CFG table=nat:36 family=10 entries=2 op=nft_register_chain pid=1934 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 07:37:36.121000 audit[1934]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffc88755a30 a2=0 a3=7ffc88755a1c items=0 ppid=1907 pid=1934 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:36.121000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 13 07:37:36.123000 audit[1935]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=1935 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 07:37:36.123000 audit[1935]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff09e47470 a2=0 a3=7fff09e4745c items=0 ppid=1907 pid=1935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:36.123000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 13 07:37:36.126060 kubelet[1907]: W1213 07:37:36.126000 1907 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.230.78.246:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.78.246:6443: connect: connection refused Dec 13 07:37:36.126241 kubelet[1907]: E1213 07:37:36.126217 1907 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.230.78.246:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.78.246:6443: connect: connection refused Dec 13 07:37:36.135733 kubelet[1907]: I1213 07:37:36.135705 1907 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 07:37:36.135890 kubelet[1907]: I1213 07:37:36.135857 1907 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 07:37:36.136033 kubelet[1907]: I1213 07:37:36.136011 1907 state_mem.go:36] "Initialized new in-memory state store" Dec 13 07:37:36.137791 kubelet[1907]: I1213 07:37:36.137767 1907 policy_none.go:49] "None policy: Start" Dec 13 07:37:36.138690 kubelet[1907]: I1213 07:37:36.138666 1907 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 07:37:36.138839 kubelet[1907]: I1213 07:37:36.138817 1907 state_mem.go:35] "Initializing new in-memory state store" Dec 13 07:37:36.148861 kubelet[1907]: I1213 07:37:36.148830 1907 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 07:37:36.148000 audit[1907]: AVC avc: denied { mac_admin } for pid=1907 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:37:36.148000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 07:37:36.148000 audit[1907]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000d29080 a1=c000ac5020 a2=c000d29050 a3=25 items=0 ppid=1 pid=1907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:36.148000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 07:37:36.153097 kubelet[1907]: I1213 07:37:36.153069 1907 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Dec 13 07:37:36.153560 kubelet[1907]: I1213 07:37:36.153535 1907 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 07:37:36.160175 kubelet[1907]: E1213 07:37:36.160136 1907 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-ktue8.gb1.brightbox.com\" not found" Dec 13 07:37:36.160456 kubelet[1907]: I1213 07:37:36.160431 1907 kubelet_node_status.go:73] "Attempting to register node" node="srv-ktue8.gb1.brightbox.com" Dec 13 07:37:36.161100 kubelet[1907]: E1213 07:37:36.161075 1907 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.78.246:6443/api/v1/nodes\": dial tcp 10.230.78.246:6443: connect: connection refused" node="srv-ktue8.gb1.brightbox.com" Dec 13 07:37:36.211026 kubelet[1907]: I1213 07:37:36.210830 1907 topology_manager.go:215] "Topology Admit Handler" podUID="88163247478e2cf6abcca327d4caa98c" podNamespace="kube-system" podName="kube-apiserver-srv-ktue8.gb1.brightbox.com" Dec 13 07:37:36.215710 kubelet[1907]: I1213 07:37:36.215684 1907 topology_manager.go:215] "Topology Admit Handler" podUID="6679f3fef3323b5b73634d448e5a44ce" podNamespace="kube-system" podName="kube-controller-manager-srv-ktue8.gb1.brightbox.com" Dec 13 07:37:36.218389 kubelet[1907]: I1213 07:37:36.218363 1907 topology_manager.go:215] "Topology Admit Handler" podUID="b5cce23f7cfcb0ec8a4ee0c30ff19c66" podNamespace="kube-system" podName="kube-scheduler-srv-ktue8.gb1.brightbox.com" Dec 13 07:37:36.258095 kubelet[1907]: I1213 07:37:36.258038 1907 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6679f3fef3323b5b73634d448e5a44ce-k8s-certs\") pod \"kube-controller-manager-srv-ktue8.gb1.brightbox.com\" (UID: \"6679f3fef3323b5b73634d448e5a44ce\") " pod="kube-system/kube-controller-manager-srv-ktue8.gb1.brightbox.com" Dec 13 07:37:36.258095 kubelet[1907]: I1213 07:37:36.258107 1907 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6679f3fef3323b5b73634d448e5a44ce-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-ktue8.gb1.brightbox.com\" (UID: \"6679f3fef3323b5b73634d448e5a44ce\") " pod="kube-system/kube-controller-manager-srv-ktue8.gb1.brightbox.com" Dec 13 07:37:36.258375 kubelet[1907]: I1213 07:37:36.258143 1907 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b5cce23f7cfcb0ec8a4ee0c30ff19c66-kubeconfig\") pod \"kube-scheduler-srv-ktue8.gb1.brightbox.com\" (UID: \"b5cce23f7cfcb0ec8a4ee0c30ff19c66\") " pod="kube-system/kube-scheduler-srv-ktue8.gb1.brightbox.com" Dec 13 07:37:36.258375 kubelet[1907]: I1213 07:37:36.258173 1907 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/88163247478e2cf6abcca327d4caa98c-ca-certs\") pod \"kube-apiserver-srv-ktue8.gb1.brightbox.com\" (UID: \"88163247478e2cf6abcca327d4caa98c\") " pod="kube-system/kube-apiserver-srv-ktue8.gb1.brightbox.com" Dec 13 07:37:36.258375 kubelet[1907]: I1213 07:37:36.258207 1907 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6679f3fef3323b5b73634d448e5a44ce-ca-certs\") pod \"kube-controller-manager-srv-ktue8.gb1.brightbox.com\" (UID: \"6679f3fef3323b5b73634d448e5a44ce\") " pod="kube-system/kube-controller-manager-srv-ktue8.gb1.brightbox.com" Dec 13 07:37:36.258375 kubelet[1907]: I1213 07:37:36.258238 1907 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6679f3fef3323b5b73634d448e5a44ce-flexvolume-dir\") pod \"kube-controller-manager-srv-ktue8.gb1.brightbox.com\" (UID: \"6679f3fef3323b5b73634d448e5a44ce\") " pod="kube-system/kube-controller-manager-srv-ktue8.gb1.brightbox.com" Dec 13 07:37:36.258375 kubelet[1907]: I1213 07:37:36.258268 1907 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/88163247478e2cf6abcca327d4caa98c-k8s-certs\") pod \"kube-apiserver-srv-ktue8.gb1.brightbox.com\" (UID: \"88163247478e2cf6abcca327d4caa98c\") " pod="kube-system/kube-apiserver-srv-ktue8.gb1.brightbox.com" Dec 13 07:37:36.258663 kubelet[1907]: I1213 07:37:36.258305 1907 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/88163247478e2cf6abcca327d4caa98c-usr-share-ca-certificates\") pod \"kube-apiserver-srv-ktue8.gb1.brightbox.com\" (UID: \"88163247478e2cf6abcca327d4caa98c\") " pod="kube-system/kube-apiserver-srv-ktue8.gb1.brightbox.com" Dec 13 07:37:36.258663 kubelet[1907]: I1213 07:37:36.258369 1907 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6679f3fef3323b5b73634d448e5a44ce-kubeconfig\") pod \"kube-controller-manager-srv-ktue8.gb1.brightbox.com\" (UID: \"6679f3fef3323b5b73634d448e5a44ce\") " pod="kube-system/kube-controller-manager-srv-ktue8.gb1.brightbox.com" Dec 13 07:37:36.260605 kubelet[1907]: E1213 07:37:36.260566 1907 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.78.246:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-ktue8.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.78.246:6443: connect: connection refused" interval="400ms" Dec 13 07:37:36.364263 kubelet[1907]: I1213 07:37:36.364210 1907 kubelet_node_status.go:73] "Attempting to register node" node="srv-ktue8.gb1.brightbox.com" Dec 13 07:37:36.364748 kubelet[1907]: E1213 07:37:36.364716 1907 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.78.246:6443/api/v1/nodes\": dial tcp 10.230.78.246:6443: connect: connection refused" node="srv-ktue8.gb1.brightbox.com" Dec 13 07:37:36.527844 env[1313]: time="2024-12-13T07:37:36.527205441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-ktue8.gb1.brightbox.com,Uid:6679f3fef3323b5b73634d448e5a44ce,Namespace:kube-system,Attempt:0,}" Dec 13 07:37:36.530987 env[1313]: time="2024-12-13T07:37:36.530690969Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:kube-apiserver-srv-ktue8.gb1.brightbox.com,Uid:88163247478e2cf6abcca327d4caa98c,Namespace:kube-system,Attempt:0,}" Dec 13 07:37:36.532906 env[1313]: time="2024-12-13T07:37:36.532842414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-ktue8.gb1.brightbox.com,Uid:b5cce23f7cfcb0ec8a4ee0c30ff19c66,Namespace:kube-system,Attempt:0,}" Dec 13 07:37:36.662390 kubelet[1907]: E1213 07:37:36.662326 1907 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.78.246:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-ktue8.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.78.246:6443: connect: connection refused" interval="800ms" Dec 13 07:37:36.767851 kubelet[1907]: I1213 07:37:36.767802 1907 kubelet_node_status.go:73] "Attempting to register node" node="srv-ktue8.gb1.brightbox.com" Dec 13 07:37:36.768348 kubelet[1907]: E1213 07:37:36.768314 1907 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.78.246:6443/api/v1/nodes\": dial tcp 10.230.78.246:6443: connect: connection refused" node="srv-ktue8.gb1.brightbox.com" Dec 13 07:37:36.997651 kubelet[1907]: W1213 07:37:36.997524 1907 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.230.78.246:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-ktue8.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.78.246:6443: connect: connection refused Dec 13 07:37:36.997651 kubelet[1907]: E1213 07:37:36.997645 1907 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.230.78.246:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-ktue8.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.78.246:6443: connect: connection refused Dec 13 07:37:37.096339 kubelet[1907]: W1213 07:37:37.096243 1907 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.230.78.246:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.78.246:6443: connect: connection refused Dec 13 07:37:37.096339 kubelet[1907]: E1213 07:37:37.096312 1907 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.230.78.246:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.78.246:6443: connect: connection refused Dec 13 07:37:37.126444 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2809153046.mount: Deactivated successfully. 
Dec 13 07:37:37.133650 env[1313]: time="2024-12-13T07:37:37.133484168Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:37:37.136521 env[1313]: time="2024-12-13T07:37:37.136422062Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:37:37.140765 env[1313]: time="2024-12-13T07:37:37.140678908Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:37:37.142321 env[1313]: time="2024-12-13T07:37:37.142277555Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:37:37.152019 env[1313]: time="2024-12-13T07:37:37.151969518Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:37:37.156640 env[1313]: time="2024-12-13T07:37:37.156602397Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:37:37.157942 env[1313]: time="2024-12-13T07:37:37.157909071Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:37:37.158805 env[1313]: time="2024-12-13T07:37:37.158767786Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:37:37.159693 env[1313]: time="2024-12-13T07:37:37.159653622Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:37:37.160589 env[1313]: time="2024-12-13T07:37:37.160530544Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:37:37.161449 env[1313]: time="2024-12-13T07:37:37.161412290Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:37:37.162340 env[1313]: time="2024-12-13T07:37:37.162295239Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:37:37.184833 kubelet[1907]: W1213 07:37:37.184509 1907 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.230.78.246:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.78.246:6443: connect: connection refused Dec 13 07:37:37.184833 kubelet[1907]: E1213 07:37:37.184615 1907 reflector.go:147] 
vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.230.78.246:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.78.246:6443: connect: connection refused Dec 13 07:37:37.198486 env[1313]: time="2024-12-13T07:37:37.198148978Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 07:37:37.198486 env[1313]: time="2024-12-13T07:37:37.198238000Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 07:37:37.198486 env[1313]: time="2024-12-13T07:37:37.198277336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 07:37:37.200004 env[1313]: time="2024-12-13T07:37:37.199901873Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a2a06ca0efd35ab32b286d4475a7a916a4047439b53f81a1d10e22bad57851c6 pid=1948 runtime=io.containerd.runc.v2 Dec 13 07:37:37.206761 env[1313]: time="2024-12-13T07:37:37.206668871Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 07:37:37.206761 env[1313]: time="2024-12-13T07:37:37.206719988Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 07:37:37.207013 env[1313]: time="2024-12-13T07:37:37.206759801Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 07:37:37.207109 env[1313]: time="2024-12-13T07:37:37.207017555Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f3a8f482405ca6e8f9ff51153bfa5bda444e586e5b110a2e573b68314bd9aaa5 pid=1971 runtime=io.containerd.runc.v2 Dec 13 07:37:37.209349 env[1313]: time="2024-12-13T07:37:37.209252757Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 07:37:37.209531 env[1313]: time="2024-12-13T07:37:37.209481331Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 07:37:37.209696 env[1313]: time="2024-12-13T07:37:37.209647333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 07:37:37.210996 env[1313]: time="2024-12-13T07:37:37.210942142Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e62903addaacb05ca31e390d9ab4fd9aace0414adc21862228164b40ef01ec60 pid=1972 runtime=io.containerd.runc.v2 Dec 13 07:37:37.234944 kubelet[1907]: W1213 07:37:37.228785 1907 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.230.78.246:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.78.246:6443: connect: connection refused Dec 13 07:37:37.234944 kubelet[1907]: E1213 07:37:37.228865 1907 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.230.78.246:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.230.78.246:6443: connect: connection refused Dec 13 07:37:37.352949 env[1313]: time="2024-12-13T07:37:37.352672278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-ktue8.gb1.brightbox.com,Uid:6679f3fef3323b5b73634d448e5a44ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"a2a06ca0efd35ab32b286d4475a7a916a4047439b53f81a1d10e22bad57851c6\"" Dec 13 07:37:37.361421 env[1313]: time="2024-12-13T07:37:37.361365579Z" level=info msg="CreateContainer within sandbox \"a2a06ca0efd35ab32b286d4475a7a916a4047439b53f81a1d10e22bad57851c6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 07:37:37.387066 env[1313]: time="2024-12-13T07:37:37.385637085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-ktue8.gb1.brightbox.com,Uid:b5cce23f7cfcb0ec8a4ee0c30ff19c66,Namespace:kube-system,Attempt:0,} returns sandbox id \"e62903addaacb05ca31e390d9ab4fd9aace0414adc21862228164b40ef01ec60\"" Dec 13 07:37:37.391417 env[1313]: time="2024-12-13T07:37:37.390076214Z" level=info msg="CreateContainer within sandbox \"a2a06ca0efd35ab32b286d4475a7a916a4047439b53f81a1d10e22bad57851c6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"cfca350670c77b2bb12ca7da4c640530f252ab453af72c526f60fa4f67aba3d1\"" Dec 13 07:37:37.391417 env[1313]: time="2024-12-13T07:37:37.390796904Z" level=info msg="StartContainer for \"cfca350670c77b2bb12ca7da4c640530f252ab453af72c526f60fa4f67aba3d1\"" Dec 13 07:37:37.392992 env[1313]: time="2024-12-13T07:37:37.392949151Z" level=info msg="CreateContainer within sandbox \"e62903addaacb05ca31e390d9ab4fd9aace0414adc21862228164b40ef01ec60\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 07:37:37.393350 env[1313]: time="2024-12-13T07:37:37.393302977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-ktue8.gb1.brightbox.com,Uid:88163247478e2cf6abcca327d4caa98c,Namespace:kube-system,Attempt:0,} returns sandbox id \"f3a8f482405ca6e8f9ff51153bfa5bda444e586e5b110a2e573b68314bd9aaa5\"" Dec 13 07:37:37.423416 env[1313]: time="2024-12-13T07:37:37.422395550Z" level=info msg="CreateContainer within sandbox \"f3a8f482405ca6e8f9ff51153bfa5bda444e586e5b110a2e573b68314bd9aaa5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 07:37:37.446450 env[1313]: time="2024-12-13T07:37:37.446382069Z" level=info msg="CreateContainer within sandbox \"e62903addaacb05ca31e390d9ab4fd9aace0414adc21862228164b40ef01ec60\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"ba9ccae852d28075256808af69975ca33514f615976e2325752cf07ddb237b20\"" Dec 13 07:37:37.447449 env[1313]: time="2024-12-13T07:37:37.447416289Z" level=info msg="StartContainer for \"ba9ccae852d28075256808af69975ca33514f615976e2325752cf07ddb237b20\"" Dec 13 07:37:37.466908 env[1313]: time="2024-12-13T07:37:37.462394731Z" level=info msg="CreateContainer within sandbox \"f3a8f482405ca6e8f9ff51153bfa5bda444e586e5b110a2e573b68314bd9aaa5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"63992b581bf0e8174830f299dfc9a660bd525d90d10cc231ca351d979538e168\"" Dec 13 07:37:37.466908 env[1313]: time="2024-12-13T07:37:37.463961624Z" level=info msg="StartContainer for \"63992b581bf0e8174830f299dfc9a660bd525d90d10cc231ca351d979538e168\"" Dec 13 07:37:37.467201 kubelet[1907]: E1213 07:37:37.463679 1907 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.78.246:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-ktue8.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.78.246:6443: connect: connection refused" interval="1.6s" Dec 13 07:37:37.536701 env[1313]: time="2024-12-13T07:37:37.536631332Z" level=info msg="StartContainer for \"cfca350670c77b2bb12ca7da4c640530f252ab453af72c526f60fa4f67aba3d1\" returns successfully" Dec 13 07:37:37.572225 kubelet[1907]: I1213 07:37:37.572178 1907 kubelet_node_status.go:73] "Attempting to register node" node="srv-ktue8.gb1.brightbox.com" Dec 13 07:37:37.572794 kubelet[1907]: E1213 07:37:37.572745 1907 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.230.78.246:6443/api/v1/nodes\": dial tcp 10.230.78.246:6443: connect: connection refused" node="srv-ktue8.gb1.brightbox.com" Dec 13 07:37:37.611998 env[1313]: time="2024-12-13T07:37:37.608067033Z" level=info msg="StartContainer for \"ba9ccae852d28075256808af69975ca33514f615976e2325752cf07ddb237b20\" returns successfully" Dec 13 07:37:37.676485 env[1313]: time="2024-12-13T07:37:37.676406905Z" level=info msg="StartContainer for \"63992b581bf0e8174830f299dfc9a660bd525d90d10cc231ca351d979538e168\" returns successfully" Dec 13 07:37:38.041202 kubelet[1907]: E1213 07:37:38.041025 1907 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.230.78.246:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.230.78.246:6443: connect: connection refused Dec 13 07:37:39.176221 kubelet[1907]: I1213 07:37:39.176176 1907 kubelet_node_status.go:73] "Attempting to register node" node="srv-ktue8.gb1.brightbox.com" Dec 13 07:37:40.476603 kubelet[1907]: I1213 07:37:40.476530 1907 kubelet_node_status.go:76] "Successfully registered node" node="srv-ktue8.gb1.brightbox.com" Dec 13 07:37:40.524504 kubelet[1907]: E1213 07:37:40.524444 1907 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{srv-ktue8.gb1.brightbox.com.1810ac7092aa638d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-ktue8.gb1.brightbox.com,UID:srv-ktue8.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-ktue8.gb1.brightbox.com,},FirstTimestamp:2024-12-13 07:37:36.034595725 +0000 UTC m=+0.460720184,LastTimestamp:2024-12-13 07:37:36.034595725 +0000 UTC 
m=+0.460720184,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-ktue8.gb1.brightbox.com,}" Dec 13 07:37:41.027577 kubelet[1907]: I1213 07:37:41.027347 1907 apiserver.go:52] "Watching apiserver" Dec 13 07:37:41.055982 kubelet[1907]: I1213 07:37:41.055918 1907 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 07:37:43.254656 systemd[1]: Reloading. Dec 13 07:37:43.371121 /usr/lib/systemd/system-generators/torcx-generator[2202]: time="2024-12-13T07:37:43Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 07:37:43.371172 /usr/lib/systemd/system-generators/torcx-generator[2202]: time="2024-12-13T07:37:43Z" level=info msg="torcx already run" Dec 13 07:37:43.507309 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 07:37:43.507975 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 07:37:43.534960 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 07:37:43.691437 systemd[1]: Stopping kubelet.service... Dec 13 07:37:43.710244 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 07:37:43.710968 systemd[1]: Stopped kubelet.service. Dec 13 07:37:43.710000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:37:43.715913 kernel: kauditd_printk_skb: 42 callbacks suppressed Dec 13 07:37:43.716080 kernel: audit: type=1131 audit(1734075463.710:226): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:37:43.718579 systemd[1]: Starting kubelet.service... Dec 13 07:37:44.670823 systemd[1]: Started kubelet.service. Dec 13 07:37:44.682566 kernel: audit: type=1130 audit(1734075464.670:227): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:37:44.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:37:44.823216 kubelet[2265]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 07:37:44.823216 kubelet[2265]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Dec 13 07:37:44.823216 kubelet[2265]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 07:37:44.823216 kubelet[2265]: I1213 07:37:44.822748 2265 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 07:37:44.830576 kubelet[2265]: I1213 07:37:44.830112 2265 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 07:37:44.830576 kubelet[2265]: I1213 07:37:44.830144 2265 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 07:37:44.830576 kubelet[2265]: I1213 07:37:44.830410 2265 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 07:37:44.833432 kubelet[2265]: I1213 07:37:44.832736 2265 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 07:37:44.859028 kubelet[2265]: I1213 07:37:44.858983 2265 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 07:37:44.913855 kubelet[2265]: I1213 07:37:44.913547 2265 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 07:37:44.914788 kubelet[2265]: I1213 07:37:44.914297 2265 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 07:37:44.914788 kubelet[2265]: I1213 07:37:44.914651 2265 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 07:37:44.914788 kubelet[2265]: I1213 07:37:44.914695 2265 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 07:37:44.914788 kubelet[2265]: I1213 07:37:44.914713 2265 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 07:37:44.914788 kubelet[2265]: I1213 07:37:44.914771 2265 state_mem.go:36] "Initialized new in-memory state store" Dec 13 07:37:44.916990 kubelet[2265]: I1213 07:37:44.916955 2265 kubelet.go:396] "Attempting to sync node 
with API server" Dec 13 07:37:44.928971 kubelet[2265]: I1213 07:37:44.927535 2265 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 07:37:44.928971 kubelet[2265]: I1213 07:37:44.927593 2265 kubelet.go:312] "Adding apiserver pod source" Dec 13 07:37:44.928971 kubelet[2265]: I1213 07:37:44.927620 2265 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 07:37:44.944014 kubelet[2265]: I1213 07:37:44.942627 2265 apiserver.go:52] "Watching apiserver" Dec 13 07:37:44.946449 kubelet[2265]: I1213 07:37:44.945048 2265 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 07:37:44.946449 kubelet[2265]: I1213 07:37:44.945337 2265 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 07:37:44.946449 kubelet[2265]: I1213 07:37:44.946119 2265 server.go:1256] "Started kubelet" Dec 13 07:37:44.959000 audit[2265]: AVC avc: denied { mac_admin } for pid=2265 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:37:44.966297 kubelet[2265]: I1213 07:37:44.964060 2265 kubelet.go:1417] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Dec 13 07:37:44.966297 kubelet[2265]: I1213 07:37:44.964140 2265 kubelet.go:1421] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Dec 13 07:37:44.966297 kubelet[2265]: I1213 07:37:44.964247 2265 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 07:37:44.959000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 07:37:44.971124 kernel: audit: type=1400 audit(1734075464.959:228): avc: denied { mac_admin } for pid=2265 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:37:44.971238 kernel: audit: type=1401 audit(1734075464.959:228): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 07:37:44.981919 kernel: audit: type=1300 audit(1734075464.959:228): arch=c000003e syscall=188 success=no exit=-22 a0=c000b97380 a1=c000a070f8 a2=c000b97350 a3=25 items=0 ppid=1 pid=2265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:44.959000 audit[2265]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000b97380 a1=c000a070f8 a2=c000b97350 a3=25 items=0 ppid=1 pid=2265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:44.982300 kubelet[2265]: I1213 07:37:44.978501 2265 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 07:37:44.982300 kubelet[2265]: I1213 07:37:44.980209 2265 server.go:461] "Adding debug handlers to kubelet server" Dec 13 07:37:44.985182 kubelet[2265]: I1213 07:37:44.983857 2265 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 07:37:44.987501 kubelet[2265]: I1213 07:37:44.985908 2265 
desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 07:37:44.987501 kubelet[2265]: I1213 07:37:44.986300 2265 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 07:37:44.987778 kubelet[2265]: I1213 07:37:44.987751 2265 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 07:37:44.988967 kubelet[2265]: I1213 07:37:44.988942 2265 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 07:37:44.991972 kubelet[2265]: I1213 07:37:44.991371 2265 factory.go:221] Registration of the systemd container factory successfully Dec 13 07:37:44.991972 kubelet[2265]: I1213 07:37:44.991643 2265 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 07:37:45.003863 kernel: audit: type=1327 audit(1734075464.959:228): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 07:37:44.959000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 07:37:44.963000 audit[2265]: AVC avc: denied { mac_admin } for pid=2265 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:37:45.009424 kubelet[2265]: I1213 07:37:45.007294 2265 factory.go:221] Registration of the containerd container factory successfully Dec 13 07:37:45.012454 kernel: audit: type=1400 audit(1734075464.963:229): avc: denied { mac_admin } for pid=2265 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:37:45.012567 kubelet[2265]: E1213 07:37:45.009783 2265 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 07:37:44.963000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 07:37:45.017737 kernel: audit: type=1401 audit(1734075464.963:229): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 07:37:44.963000 audit[2265]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000bba6c0 a1=c000a07110 a2=c000b97410 a3=25 items=0 ppid=1 pid=2265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:45.025940 kernel: audit: type=1300 audit(1734075464.963:229): arch=c000003e syscall=188 success=no exit=-22 a0=c000bba6c0 a1=c000a07110 a2=c000b97410 a3=25 items=0 ppid=1 pid=2265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:44.963000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 07:37:45.032177 kubelet[2265]: I1213 07:37:45.032116 2265 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 07:37:45.032925 kernel: audit: type=1327 audit(1734075464.963:229): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 07:37:45.039647 kubelet[2265]: I1213 07:37:45.039621 2265 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 07:37:45.039850 kubelet[2265]: I1213 07:37:45.039824 2265 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 07:37:45.040037 kubelet[2265]: I1213 07:37:45.040012 2265 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 07:37:45.040302 kubelet[2265]: E1213 07:37:45.040251 2265 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 07:37:45.108254 kubelet[2265]: I1213 07:37:45.108200 2265 kubelet_node_status.go:73] "Attempting to register node" node="srv-ktue8.gb1.brightbox.com" Dec 13 07:37:45.117299 kubelet[2265]: I1213 07:37:45.117263 2265 kubelet_node_status.go:112] "Node was previously registered" node="srv-ktue8.gb1.brightbox.com" Dec 13 07:37:45.117474 kubelet[2265]: I1213 07:37:45.117384 2265 kubelet_node_status.go:76] "Successfully registered node" node="srv-ktue8.gb1.brightbox.com" Dec 13 07:37:45.142676 kubelet[2265]: E1213 07:37:45.142629 2265 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 07:37:45.166453 kubelet[2265]: I1213 07:37:45.166409 2265 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 07:37:45.166738 kubelet[2265]: I1213 07:37:45.166714 2265 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 07:37:45.166931 kubelet[2265]: I1213 07:37:45.166907 2265 state_mem.go:36] "Initialized new in-memory state store" Dec 13 07:37:45.167343 kubelet[2265]: I1213 07:37:45.167318 2265 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 07:37:45.167515 kubelet[2265]: I1213 07:37:45.167488 2265 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 07:37:45.167662 kubelet[2265]: I1213 07:37:45.167637 2265 policy_none.go:49] "None policy: Start" Dec 13 07:37:45.168732 kubelet[2265]: I1213 07:37:45.168691 2265 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 07:37:45.168915 kubelet[2265]: I1213 07:37:45.168874 2265 state_mem.go:35] "Initializing new in-memory state store" Dec 13 07:37:45.169374 kubelet[2265]: I1213 07:37:45.169348 2265 state_mem.go:75] "Updated machine memory state" Dec 13 07:37:45.175953 kubelet[2265]: I1213 07:37:45.175926 2265 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 07:37:45.175000 audit[2265]: AVC avc: denied { mac_admin } for pid=2265 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:37:45.175000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 07:37:45.175000 audit[2265]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c001332a50 a1=c0012b9f68 a2=c001332a20 a3=25 items=0 ppid=1 pid=2265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:45.175000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 07:37:45.176700 kubelet[2265]: I1213 07:37:45.176276 2265 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Dec 13 07:37:45.177505 kubelet[2265]: I1213 07:37:45.177479 2265 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 07:37:45.344305 kubelet[2265]: I1213 07:37:45.344232 2265 topology_manager.go:215] "Topology Admit Handler" podUID="88163247478e2cf6abcca327d4caa98c" podNamespace="kube-system" podName="kube-apiserver-srv-ktue8.gb1.brightbox.com" Dec 13 07:37:45.344888 kubelet[2265]: I1213 07:37:45.344851 2265 topology_manager.go:215] "Topology Admit Handler" podUID="6679f3fef3323b5b73634d448e5a44ce" podNamespace="kube-system" podName="kube-controller-manager-srv-ktue8.gb1.brightbox.com" Dec 13 07:37:45.345213 kubelet[2265]: I1213 07:37:45.345181 2265 topology_manager.go:215] "Topology Admit Handler" podUID="b5cce23f7cfcb0ec8a4ee0c30ff19c66" podNamespace="kube-system" podName="kube-scheduler-srv-ktue8.gb1.brightbox.com" Dec 13 07:37:45.360805 kubelet[2265]: W1213 07:37:45.360780 2265 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 07:37:45.362373 kubelet[2265]: W1213 07:37:45.362349 2265 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 07:37:45.363121 kubelet[2265]: W1213 07:37:45.363098 2265 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 07:37:45.386562 kubelet[2265]: I1213 07:37:45.386523 2265 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 07:37:45.391003 kubelet[2265]: I1213 07:37:45.390978 2265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6679f3fef3323b5b73634d448e5a44ce-kubeconfig\") pod \"kube-controller-manager-srv-ktue8.gb1.brightbox.com\" (UID: \"6679f3fef3323b5b73634d448e5a44ce\") " pod="kube-system/kube-controller-manager-srv-ktue8.gb1.brightbox.com" Dec 13 07:37:45.391198 kubelet[2265]: I1213 07:37:45.391173 2265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6679f3fef3323b5b73634d448e5a44ce-flexvolume-dir\") pod \"kube-controller-manager-srv-ktue8.gb1.brightbox.com\" (UID: \"6679f3fef3323b5b73634d448e5a44ce\") " pod="kube-system/kube-controller-manager-srv-ktue8.gb1.brightbox.com" Dec 13 07:37:45.391367 kubelet[2265]: I1213 07:37:45.391345 2265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6679f3fef3323b5b73634d448e5a44ce-k8s-certs\") pod \"kube-controller-manager-srv-ktue8.gb1.brightbox.com\" (UID: \"6679f3fef3323b5b73634d448e5a44ce\") " pod="kube-system/kube-controller-manager-srv-ktue8.gb1.brightbox.com" Dec 13 07:37:45.391570 kubelet[2265]: I1213 07:37:45.391547 2265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6679f3fef3323b5b73634d448e5a44ce-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-ktue8.gb1.brightbox.com\" (UID: \"6679f3fef3323b5b73634d448e5a44ce\") " 
pod="kube-system/kube-controller-manager-srv-ktue8.gb1.brightbox.com" Dec 13 07:37:45.391719 kubelet[2265]: I1213 07:37:45.391696 2265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b5cce23f7cfcb0ec8a4ee0c30ff19c66-kubeconfig\") pod \"kube-scheduler-srv-ktue8.gb1.brightbox.com\" (UID: \"b5cce23f7cfcb0ec8a4ee0c30ff19c66\") " pod="kube-system/kube-scheduler-srv-ktue8.gb1.brightbox.com" Dec 13 07:37:45.391916 kubelet[2265]: I1213 07:37:45.391894 2265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/88163247478e2cf6abcca327d4caa98c-ca-certs\") pod \"kube-apiserver-srv-ktue8.gb1.brightbox.com\" (UID: \"88163247478e2cf6abcca327d4caa98c\") " pod="kube-system/kube-apiserver-srv-ktue8.gb1.brightbox.com" Dec 13 07:37:45.392070 kubelet[2265]: I1213 07:37:45.392048 2265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/88163247478e2cf6abcca327d4caa98c-k8s-certs\") pod \"kube-apiserver-srv-ktue8.gb1.brightbox.com\" (UID: \"88163247478e2cf6abcca327d4caa98c\") " pod="kube-system/kube-apiserver-srv-ktue8.gb1.brightbox.com" Dec 13 07:37:45.392252 kubelet[2265]: I1213 07:37:45.392230 2265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/88163247478e2cf6abcca327d4caa98c-usr-share-ca-certificates\") pod \"kube-apiserver-srv-ktue8.gb1.brightbox.com\" (UID: \"88163247478e2cf6abcca327d4caa98c\") " pod="kube-system/kube-apiserver-srv-ktue8.gb1.brightbox.com" Dec 13 07:37:45.392400 kubelet[2265]: I1213 07:37:45.392376 2265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6679f3fef3323b5b73634d448e5a44ce-ca-certs\") pod \"kube-controller-manager-srv-ktue8.gb1.brightbox.com\" (UID: \"6679f3fef3323b5b73634d448e5a44ce\") " pod="kube-system/kube-controller-manager-srv-ktue8.gb1.brightbox.com" Dec 13 07:37:46.155556 kubelet[2265]: I1213 07:37:46.155497 2265 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-ktue8.gb1.brightbox.com" podStartSLOduration=1.155412189 podStartE2EDuration="1.155412189s" podCreationTimestamp="2024-12-13 07:37:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 07:37:46.130005446 +0000 UTC m=+1.407299785" watchObservedRunningTime="2024-12-13 07:37:46.155412189 +0000 UTC m=+1.432706535" Dec 13 07:37:46.183293 kubelet[2265]: I1213 07:37:46.183236 2265 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-ktue8.gb1.brightbox.com" podStartSLOduration=1.1831788030000001 podStartE2EDuration="1.183178803s" podCreationTimestamp="2024-12-13 07:37:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 07:37:46.156906164 +0000 UTC m=+1.434200511" watchObservedRunningTime="2024-12-13 07:37:46.183178803 +0000 UTC m=+1.460473144" Dec 13 07:37:46.227861 kubelet[2265]: I1213 07:37:46.227810 2265 pod_startup_latency_tracker.go:102] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-srv-ktue8.gb1.brightbox.com" podStartSLOduration=1.227752215 podStartE2EDuration="1.227752215s" podCreationTimestamp="2024-12-13 07:37:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 07:37:46.184315481 +0000 UTC m=+1.461609817" watchObservedRunningTime="2024-12-13 07:37:46.227752215 +0000 UTC m=+1.505046556" Dec 13 07:37:50.819157 sudo[1536]: pam_unix(sudo:session): session closed for user root Dec 13 07:37:50.818000 audit[1536]: USER_END pid=1536 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 07:37:50.822860 kernel: kauditd_printk_skb: 4 callbacks suppressed Dec 13 07:37:50.822982 kernel: audit: type=1106 audit(1734075470.818:231): pid=1536 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 07:37:50.824000 audit[1536]: CRED_DISP pid=1536 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 07:37:50.834933 kernel: audit: type=1104 audit(1734075470.824:232): pid=1536 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 07:37:50.975812 sshd[1532]: pam_unix(sshd:session): session closed for user core Dec 13 07:37:50.978000 audit[1532]: USER_END pid=1532 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:37:50.987845 systemd[1]: sshd@8-10.230.78.246:22-139.178.89.65:59394.service: Deactivated successfully. Dec 13 07:37:50.988388 kernel: audit: type=1106 audit(1734075470.978:233): pid=1532 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:37:50.978000 audit[1532]: CRED_DISP pid=1532 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:37:50.998754 kernel: audit: type=1104 audit(1734075470.978:234): pid=1532 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:37:50.997215 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 07:37:50.997330 systemd-logind[1301]: Session 9 logged out. Waiting for processes to exit. Dec 13 07:37:50.999862 systemd-logind[1301]: Removed session 9. 
Dec 13 07:37:50.987000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.230.78.246:22-139.178.89.65:59394 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:37:51.007000 kernel: audit: type=1131 audit(1734075470.987:235): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.230.78.246:22-139.178.89.65:59394 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:37:56.023709 kubelet[2265]: I1213 07:37:56.023658 2265 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 07:37:56.025064 env[1313]: time="2024-12-13T07:37:56.024974108Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 07:37:56.025724 kubelet[2265]: I1213 07:37:56.025700 2265 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 07:37:56.928899 kubelet[2265]: I1213 07:37:56.928837 2265 topology_manager.go:215] "Topology Admit Handler" podUID="e4a71092-3c4e-49b3-a58c-6109e0ad9921" podNamespace="kube-system" podName="kube-proxy-bhdz6" Dec 13 07:37:57.068555 kubelet[2265]: I1213 07:37:57.068508 2265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e4a71092-3c4e-49b3-a58c-6109e0ad9921-xtables-lock\") pod \"kube-proxy-bhdz6\" (UID: \"e4a71092-3c4e-49b3-a58c-6109e0ad9921\") " pod="kube-system/kube-proxy-bhdz6" Dec 13 07:37:57.069256 kubelet[2265]: I1213 07:37:57.069230 2265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pkbq\" (UniqueName: \"kubernetes.io/projected/e4a71092-3c4e-49b3-a58c-6109e0ad9921-kube-api-access-2pkbq\") pod \"kube-proxy-bhdz6\" (UID: \"e4a71092-3c4e-49b3-a58c-6109e0ad9921\") " pod="kube-system/kube-proxy-bhdz6" Dec 13 07:37:57.069428 kubelet[2265]: I1213 07:37:57.069403 2265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e4a71092-3c4e-49b3-a58c-6109e0ad9921-kube-proxy\") pod \"kube-proxy-bhdz6\" (UID: \"e4a71092-3c4e-49b3-a58c-6109e0ad9921\") " pod="kube-system/kube-proxy-bhdz6" Dec 13 07:37:57.069586 kubelet[2265]: I1213 07:37:57.069563 2265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e4a71092-3c4e-49b3-a58c-6109e0ad9921-lib-modules\") pod \"kube-proxy-bhdz6\" (UID: \"e4a71092-3c4e-49b3-a58c-6109e0ad9921\") " pod="kube-system/kube-proxy-bhdz6" Dec 13 07:37:57.114204 kubelet[2265]: I1213 07:37:57.114151 2265 topology_manager.go:215] "Topology Admit Handler" podUID="2b797b24-d0c5-4fe8-8b4f-6bb3dd413c02" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-dd5wb" Dec 13 07:37:57.170204 kubelet[2265]: I1213 07:37:57.170158 2265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2b797b24-d0c5-4fe8-8b4f-6bb3dd413c02-var-lib-calico\") pod \"tigera-operator-c7ccbd65-dd5wb\" (UID: \"2b797b24-d0c5-4fe8-8b4f-6bb3dd413c02\") " pod="tigera-operator/tigera-operator-c7ccbd65-dd5wb" Dec 13 07:37:57.170596 kubelet[2265]: I1213 07:37:57.170572 2265 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k4v5\" (UniqueName: \"kubernetes.io/projected/2b797b24-d0c5-4fe8-8b4f-6bb3dd413c02-kube-api-access-6k4v5\") pod \"tigera-operator-c7ccbd65-dd5wb\" (UID: \"2b797b24-d0c5-4fe8-8b4f-6bb3dd413c02\") " pod="tigera-operator/tigera-operator-c7ccbd65-dd5wb" Dec 13 07:37:57.238553 env[1313]: time="2024-12-13T07:37:57.237740372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bhdz6,Uid:e4a71092-3c4e-49b3-a58c-6109e0ad9921,Namespace:kube-system,Attempt:0,}" Dec 13 07:37:57.265176 env[1313]: time="2024-12-13T07:37:57.265028855Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 07:37:57.265176 env[1313]: time="2024-12-13T07:37:57.265113770Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 07:37:57.265566 env[1313]: time="2024-12-13T07:37:57.265143474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 07:37:57.266027 env[1313]: time="2024-12-13T07:37:57.265970930Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bd5ddebace7237e20d846207ea7f3551a8f447b7cf7d3c4651c59f33650bf6a3 pid=2353 runtime=io.containerd.runc.v2 Dec 13 07:37:57.348892 env[1313]: time="2024-12-13T07:37:57.348807849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bhdz6,Uid:e4a71092-3c4e-49b3-a58c-6109e0ad9921,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd5ddebace7237e20d846207ea7f3551a8f447b7cf7d3c4651c59f33650bf6a3\"" Dec 13 07:37:57.358204 env[1313]: time="2024-12-13T07:37:57.358142462Z" level=info msg="CreateContainer within sandbox \"bd5ddebace7237e20d846207ea7f3551a8f447b7cf7d3c4651c59f33650bf6a3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 07:37:57.379615 env[1313]: time="2024-12-13T07:37:57.379517333Z" level=info msg="CreateContainer within sandbox \"bd5ddebace7237e20d846207ea7f3551a8f447b7cf7d3c4651c59f33650bf6a3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"86c6ba4dcc36eee75c6987a80ef87577dcaf31dc38a73148d233fad460a797f2\"" Dec 13 07:37:57.382195 env[1313]: time="2024-12-13T07:37:57.382159491Z" level=info msg="StartContainer for \"86c6ba4dcc36eee75c6987a80ef87577dcaf31dc38a73148d233fad460a797f2\"" Dec 13 07:37:57.419562 env[1313]: time="2024-12-13T07:37:57.419492857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-dd5wb,Uid:2b797b24-d0c5-4fe8-8b4f-6bb3dd413c02,Namespace:tigera-operator,Attempt:0,}" Dec 13 07:37:57.460055 env[1313]: time="2024-12-13T07:37:57.454707689Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 07:37:57.460055 env[1313]: time="2024-12-13T07:37:57.454789311Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 07:37:57.460055 env[1313]: time="2024-12-13T07:37:57.454805548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 07:37:57.460055 env[1313]: time="2024-12-13T07:37:57.455413575Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4f8eea62bebfbe5031aa77d2fcedf067045220a8dbc826c9d09e2590b5350068 pid=2417 runtime=io.containerd.runc.v2 Dec 13 07:37:57.479990 env[1313]: time="2024-12-13T07:37:57.479926448Z" level=info msg="StartContainer for \"86c6ba4dcc36eee75c6987a80ef87577dcaf31dc38a73148d233fad460a797f2\" returns successfully" Dec 13 07:37:57.558487 env[1313]: time="2024-12-13T07:37:57.557344393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-dd5wb,Uid:2b797b24-d0c5-4fe8-8b4f-6bb3dd413c02,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4f8eea62bebfbe5031aa77d2fcedf067045220a8dbc826c9d09e2590b5350068\"" Dec 13 07:37:57.564705 env[1313]: time="2024-12-13T07:37:57.564646138Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Dec 13 07:37:57.815000 audit[2488]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2488 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 07:37:57.822918 kernel: audit: type=1325 audit(1734075477.815:236): table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2488 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 07:37:57.815000 audit[2488]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdccb76be0 a2=0 a3=7ffdccb76bcc items=0 ppid=2405 pid=2488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:57.815000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 13 07:37:57.837000 kernel: audit: type=1300 audit(1734075477.815:236): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdccb76be0 a2=0 a3=7ffdccb76bcc items=0 ppid=2405 pid=2488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:57.837119 kernel: audit: type=1327 audit(1734075477.815:236): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 13 07:37:57.824000 audit[2489]: NETFILTER_CFG table=nat:39 family=2 entries=1 op=nft_register_chain pid=2489 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 07:37:57.841143 kernel: audit: type=1325 audit(1734075477.824:237): table=nat:39 family=2 entries=1 op=nft_register_chain pid=2489 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 07:37:57.824000 audit[2489]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffece09e630 a2=0 a3=7ffece09e61c items=0 ppid=2405 pid=2489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:57.824000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 13 07:37:57.855918 kernel: audit: type=1300 audit(1734075477.824:237): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffece09e630 a2=0 a3=7ffece09e61c items=0 ppid=2405 pid=2489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:57.856037 kernel: audit: type=1327 audit(1734075477.824:237): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 13 07:37:57.824000 audit[2490]: NETFILTER_CFG table=mangle:40 family=10 entries=1 op=nft_register_chain pid=2490 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 07:37:57.824000 audit[2490]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff5f3a5860 a2=0 a3=93df4e5c14978690 items=0 ppid=2405 pid=2490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:57.867116 kernel: audit: type=1325 audit(1734075477.824:238): table=mangle:40 family=10 entries=1 op=nft_register_chain pid=2490 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 07:37:57.867220 kernel: audit: type=1300 audit(1734075477.824:238): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff5f3a5860 a2=0 a3=93df4e5c14978690 items=0 ppid=2405 pid=2490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:57.867285 kernel: audit: type=1327 audit(1734075477.824:238): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 13 07:37:57.824000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 13 07:37:57.826000 audit[2491]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_chain pid=2491 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 07:37:57.874651 kernel: audit: type=1325 audit(1734075477.826:239): table=filter:41 family=2 entries=1 op=nft_register_chain pid=2491 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 07:37:57.826000 audit[2491]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffef72a8d70 a2=0 a3=7ffef72a8d5c items=0 ppid=2405 pid=2491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:57.826000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 13 07:37:57.843000 audit[2492]: NETFILTER_CFG table=nat:42 family=10 entries=1 op=nft_register_chain pid=2492 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 07:37:57.843000 audit[2492]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdc3a204d0 a2=0 a3=7ffdc3a204bc items=0 ppid=2405 pid=2492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:57.843000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 13 07:37:57.843000 audit[2493]: NETFILTER_CFG table=filter:43 family=10 entries=1 op=nft_register_chain pid=2493 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 07:37:57.843000 
audit[2493]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffeaa5cfaf0 a2=0 a3=7ffeaa5cfadc items=0 ppid=2405 pid=2493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:57.843000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 13 07:37:57.941000 audit[2494]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2494 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 07:37:57.941000 audit[2494]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe239769d0 a2=0 a3=7ffe239769bc items=0 ppid=2405 pid=2494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:57.941000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 13 07:37:57.947000 audit[2496]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2496 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 07:37:57.947000 audit[2496]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff7918b790 a2=0 a3=7fff7918b77c items=0 ppid=2405 pid=2496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:57.947000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Dec 13 07:37:57.953000 audit[2499]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2499 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 07:37:57.953000 audit[2499]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff57c13f20 a2=0 a3=7fff57c13f0c items=0 ppid=2405 pid=2499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:57.953000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Dec 13 07:37:57.955000 audit[2500]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2500 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 07:37:57.955000 audit[2500]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc586a17a0 a2=0 a3=7ffc586a178c items=0 ppid=2405 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:57.955000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 13 07:37:57.960000 audit[2502]: 
NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2502 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 07:37:57.960000 audit[2502]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff09f559e0 a2=0 a3=7fff09f559cc items=0 ppid=2405 pid=2502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:57.960000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 13 07:37:57.962000 audit[2503]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2503 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 07:37:57.962000 audit[2503]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd87ad2590 a2=0 a3=7ffd87ad257c items=0 ppid=2405 pid=2503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:57.962000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 13 07:37:57.967000 audit[2505]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2505 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 07:37:57.967000 audit[2505]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe1710a980 a2=0 a3=7ffe1710a96c items=0 ppid=2405 pid=2505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:57.967000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 13 07:37:57.973000 audit[2508]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2508 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 07:37:57.973000 audit[2508]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffecc7de800 a2=0 a3=7ffecc7de7ec items=0 ppid=2405 pid=2508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:57.973000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Dec 13 07:37:57.976000 audit[2509]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2509 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 07:37:57.976000 audit[2509]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc65812370 a2=0 a3=7ffc6581235c items=0 ppid=2405 pid=2509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:57.976000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 13 07:37:57.980000 audit[2511]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2511 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 07:37:57.980000 audit[2511]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc81f1bc70 a2=0 a3=7ffc81f1bc5c items=0 ppid=2405 pid=2511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:57.980000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 13 07:37:57.982000 audit[2512]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2512 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 07:37:57.982000 audit[2512]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff4f9a4ef0 a2=0 a3=7fff4f9a4edc items=0 ppid=2405 pid=2512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:57.982000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 13 07:37:57.986000 audit[2514]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2514 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 07:37:57.986000 audit[2514]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff186f32b0 a2=0 a3=7fff186f329c items=0 ppid=2405 pid=2514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:57.986000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 13 07:37:57.993000 audit[2517]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2517 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 07:37:57.993000 audit[2517]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd01e4fae0 a2=0 a3=7ffd01e4facc items=0 ppid=2405 pid=2517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:57.993000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 13 07:37:57.998000 audit[2520]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2520 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 07:37:57.998000 audit[2520]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd7f902f40 a2=0 a3=7ffd7f902f2c items=0 ppid=2405 pid=2520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:57.998000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 13 07:37:58.000000 audit[2521]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2521 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 07:37:58.000000 audit[2521]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffcc80e95c0 a2=0 a3=7ffcc80e95ac items=0 ppid=2405 pid=2521 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:58.000000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 13 07:37:58.005000 audit[2523]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2523 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 07:37:58.005000 audit[2523]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffd2dc52280 a2=0 a3=7ffd2dc5226c items=0 ppid=2405 pid=2523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:58.005000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 07:37:58.012000 audit[2526]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2526 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 07:37:58.012000 audit[2526]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdd7709860 a2=0 a3=7ffdd770984c items=0 ppid=2405 pid=2526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:58.012000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 07:37:58.014000 audit[2527]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2527 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 07:37:58.014000 audit[2527]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe44ec64c0 a2=0 a3=7ffe44ec64ac items=0 ppid=2405 pid=2527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:58.014000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 13 07:37:58.018000 audit[2529]: 
NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2529 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 07:37:58.018000 audit[2529]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffd24266b50 a2=0 a3=7ffd24266b3c items=0 ppid=2405 pid=2529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:58.018000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 13 07:37:58.056000 audit[2535]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2535 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 07:37:58.056000 audit[2535]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7fff0e5ac040 a2=0 a3=7fff0e5ac02c items=0 ppid=2405 pid=2535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:58.056000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 07:37:58.074000 audit[2535]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2535 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 07:37:58.074000 audit[2535]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7fff0e5ac040 a2=0 a3=7fff0e5ac02c items=0 ppid=2405 pid=2535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:58.074000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 07:37:58.077000 audit[2540]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2540 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 07:37:58.077000 audit[2540]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffcd9660ac0 a2=0 a3=7ffcd9660aac items=0 ppid=2405 pid=2540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:58.077000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 13 07:37:58.080000 audit[2542]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2542 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 07:37:58.080000 audit[2542]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffd65bb0100 a2=0 a3=7ffd65bb00ec items=0 ppid=2405 pid=2542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:58.080000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Dec 13 07:37:58.086000 audit[2545]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2545 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 07:37:58.086000 audit[2545]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe3bc73c90 a2=0 a3=7ffe3bc73c7c items=0 ppid=2405 pid=2545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:58.086000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Dec 13 07:37:58.088000 audit[2546]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2546 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 07:37:58.088000 audit[2546]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdde90c270 a2=0 a3=7ffdde90c25c items=0 ppid=2405 pid=2546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:58.088000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 13 07:37:58.091000 audit[2548]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2548 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 07:37:58.091000 audit[2548]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffdbe009e70 a2=0 a3=7ffdbe009e5c items=0 ppid=2405 pid=2548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:58.091000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 13 07:37:58.093000 audit[2549]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2549 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 07:37:58.093000 audit[2549]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeac106d90 a2=0 a3=7ffeac106d7c items=0 ppid=2405 pid=2549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:58.093000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 13 07:37:58.095000 audit[2551]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2551 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 07:37:58.095000 audit[2551]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff68a72e60 a2=0 
a3=7fff68a72e4c items=0 ppid=2405 pid=2551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:58.095000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Dec 13 07:37:58.103000 audit[2554]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2554 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 07:37:58.103000 audit[2554]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7fffbd8a6dc0 a2=0 a3=7fffbd8a6dac items=0 ppid=2405 pid=2554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:58.103000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 13 07:37:58.105000 audit[2555]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2555 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 07:37:58.105000 audit[2555]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd807aca00 a2=0 a3=7ffd807ac9ec items=0 ppid=2405 pid=2555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:58.105000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 13 07:37:58.110000 audit[2557]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2557 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 07:37:58.110000 audit[2557]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe784ade50 a2=0 a3=7ffe784ade3c items=0 ppid=2405 pid=2557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:58.110000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 13 07:37:58.113000 audit[2558]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2558 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 07:37:58.113000 audit[2558]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffeb55fd8e0 a2=0 a3=7ffeb55fd8cc items=0 ppid=2405 pid=2558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:58.113000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 13 07:37:58.120000 
audit[2560]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2560 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 07:37:58.120000 audit[2560]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffd1fcb750 a2=0 a3=7fffd1fcb73c items=0 ppid=2405 pid=2560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:58.120000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 13 07:37:58.131000 audit[2563]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2563 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 07:37:58.131000 audit[2563]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe2c1334f0 a2=0 a3=7ffe2c1334dc items=0 ppid=2405 pid=2563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:58.131000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 13 07:37:58.138000 audit[2566]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2566 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 07:37:58.138000 audit[2566]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe0fbff190 a2=0 a3=7ffe0fbff17c items=0 ppid=2405 pid=2566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:58.138000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Dec 13 07:37:58.140000 audit[2567]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2567 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 07:37:58.140000 audit[2567]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe8eac9580 a2=0 a3=7ffe8eac956c items=0 ppid=2405 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:58.140000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 13 07:37:58.152000 audit[2569]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2569 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 07:37:58.152000 audit[2569]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7fff24fd0b50 a2=0 a3=7fff24fd0b3c items=0 ppid=2405 pid=2569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:58.152000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 07:37:58.158000 audit[2572]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2572 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 07:37:58.158000 audit[2572]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffe19975da0 a2=0 a3=7ffe19975d8c items=0 ppid=2405 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:58.158000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 07:37:58.160000 audit[2573]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2573 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 07:37:58.160000 audit[2573]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffefbaca9e0 a2=0 a3=7ffefbaca9cc items=0 ppid=2405 pid=2573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:58.160000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 13 07:37:58.165000 audit[2575]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2575 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 07:37:58.165000 audit[2575]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffdf8c6f910 a2=0 a3=7ffdf8c6f8fc items=0 ppid=2405 pid=2575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:58.165000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 13 07:37:58.167000 audit[2576]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2576 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 07:37:58.167000 audit[2576]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcf492f230 a2=0 a3=7ffcf492f21c items=0 ppid=2405 pid=2576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:58.167000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 13 07:37:58.171000 audit[2578]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2578 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 07:37:58.171000 audit[2578]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=228 a0=3 a1=7ffc3bd2aa90 a2=0 a3=7ffc3bd2aa7c items=0 ppid=2405 pid=2578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:58.171000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 07:37:58.176000 audit[2581]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2581 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 07:37:58.176000 audit[2581]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe7aeb4790 a2=0 a3=7ffe7aeb477c items=0 ppid=2405 pid=2581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:58.176000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 07:37:58.181000 audit[2583]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2583 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 13 07:37:58.181000 audit[2583]: SYSCALL arch=c000003e syscall=46 success=yes exit=2004 a0=3 a1=7fff51d07a50 a2=0 a3=7fff51d07a3c items=0 ppid=2405 pid=2583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:58.181000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 07:37:58.182000 audit[2583]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2583 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 13 07:37:58.182000 audit[2583]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7fff51d07a50 a2=0 a3=7fff51d07a3c items=0 ppid=2405 pid=2583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:37:58.182000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 07:38:00.217647 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1898311394.mount: Deactivated successfully. 
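The NETFILTER_CFG / SYSCALL / PROCTITLE triplets above are kube-proxy creating its KUBE-* chains through iptables-nft under audit logging; the PROCTITLE field is the invoked command line, hex-encoded with NUL-separated arguments. A minimal decoding sketch in Python, fed the first mangle-table record from this run (the helper name is illustrative, not something taken from this log):

```python
def decode_proctitle(hexstr: str) -> list[str]:
    """Recover the argv list from an audit PROCTITLE hex string (NUL-separated)."""
    return bytes.fromhex(hexstr).rstrip(b"\x00").decode("utf-8", "replace").split("\x00")

# PROCTITLE value from the audit record at 07:37:57.815 above.
sample = (
    "69707461626C6573002D770035002D5700313030303030"
    "002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65"
)
print(decode_proctitle(sample))
# ['iptables', '-w', '5', '-W', '100000', '-N', 'KUBE-PROXY-CANARY', '-t', 'mangle']
```

The same decoding applied to the later ip6tables and iptables-restore records shows the IPv6 copies of the KUBE-SERVICES, KUBE-FORWARD and KUBE-POSTROUTING chains being installed.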
Dec 13 07:38:02.204664 env[1313]: time="2024-12-13T07:38:02.204554813Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.36.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:38:02.209863 env[1313]: time="2024-12-13T07:38:02.209796874Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:38:02.213060 env[1313]: time="2024-12-13T07:38:02.213024086Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.36.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:38:02.215988 env[1313]: time="2024-12-13T07:38:02.215952298Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:38:02.216978 env[1313]: time="2024-12-13T07:38:02.216939815Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Dec 13 07:38:02.223132 env[1313]: time="2024-12-13T07:38:02.223050137Z" level=info msg="CreateContainer within sandbox \"4f8eea62bebfbe5031aa77d2fcedf067045220a8dbc826c9d09e2590b5350068\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 13 07:38:02.238310 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1574586236.mount: Deactivated successfully. Dec 13 07:38:02.267117 env[1313]: time="2024-12-13T07:38:02.266995555Z" level=info msg="CreateContainer within sandbox \"4f8eea62bebfbe5031aa77d2fcedf067045220a8dbc826c9d09e2590b5350068\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"3828ab4c43a7600bd34c1320a455cce426146d27e50d0fc97333bdc242ee161e\"" Dec 13 07:38:02.270675 env[1313]: time="2024-12-13T07:38:02.268792002Z" level=info msg="StartContainer for \"3828ab4c43a7600bd34c1320a455cce426146d27e50d0fc97333bdc242ee161e\"" Dec 13 07:38:02.363038 env[1313]: time="2024-12-13T07:38:02.362948573Z" level=info msg="StartContainer for \"3828ab4c43a7600bd34c1320a455cce426146d27e50d0fc97333bdc242ee161e\" returns successfully" Dec 13 07:38:03.155942 kubelet[2265]: I1213 07:38:03.155875 2265 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-bhdz6" podStartSLOduration=7.154575747 podStartE2EDuration="7.154575747s" podCreationTimestamp="2024-12-13 07:37:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 07:37:58.144339447 +0000 UTC m=+13.421633798" watchObservedRunningTime="2024-12-13 07:38:03.154575747 +0000 UTC m=+18.431870092" Dec 13 07:38:03.157876 kubelet[2265]: I1213 07:38:03.156075 2265 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-dd5wb" podStartSLOduration=1.499237549 podStartE2EDuration="6.156026392s" podCreationTimestamp="2024-12-13 07:37:57 +0000 UTC" firstStartedPulling="2024-12-13 07:37:57.561860889 +0000 UTC m=+12.839155223" lastFinishedPulling="2024-12-13 07:38:02.218649732 +0000 UTC m=+17.495944066" observedRunningTime="2024-12-13 07:38:03.153114023 +0000 UTC m=+18.430408378" watchObservedRunningTime="2024-12-13 07:38:03.156026392 +0000 UTC 
m=+18.433320737" Dec 13 07:38:05.882406 kernel: kauditd_printk_skb: 143 callbacks suppressed Dec 13 07:38:05.883036 kernel: audit: type=1325 audit(1734075485.872:287): table=filter:89 family=2 entries=15 op=nft_register_rule pid=2623 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 07:38:05.872000 audit[2623]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2623 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 07:38:05.872000 audit[2623]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffddb7c61f0 a2=0 a3=7ffddb7c61dc items=0 ppid=2405 pid=2623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:05.894909 kernel: audit: type=1300 audit(1734075485.872:287): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffddb7c61f0 a2=0 a3=7ffddb7c61dc items=0 ppid=2405 pid=2623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:05.872000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 07:38:05.894000 audit[2623]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2623 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 07:38:05.903273 kernel: audit: type=1327 audit(1734075485.872:287): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 07:38:05.903423 kernel: audit: type=1325 audit(1734075485.894:288): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2623 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 07:38:05.903509 kernel: audit: type=1300 audit(1734075485.894:288): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffddb7c61f0 a2=0 a3=0 items=0 ppid=2405 pid=2623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:05.894000 audit[2623]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffddb7c61f0 a2=0 a3=0 items=0 ppid=2405 pid=2623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:05.894000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 07:38:05.929957 kernel: audit: type=1327 audit(1734075485.894:288): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 07:38:05.938000 audit[2625]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2625 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 07:38:05.943911 kernel: audit: type=1325 audit(1734075485.938:289): table=filter:91 family=2 entries=16 op=nft_register_rule pid=2625 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 07:38:05.938000 audit[2625]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fff50a48cb0 a2=0 a3=7fff50a48c9c items=0 ppid=2405 pid=2625 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:05.951909 kernel: audit: type=1300 audit(1734075485.938:289): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7fff50a48cb0 a2=0 a3=7fff50a48c9c items=0 ppid=2405 pid=2625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:05.952038 kernel: audit: type=1327 audit(1734075485.938:289): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 07:38:05.938000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 07:38:05.956000 audit[2625]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2625 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 07:38:05.960910 kernel: audit: type=1325 audit(1734075485.956:290): table=nat:92 family=2 entries=12 op=nft_register_rule pid=2625 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 07:38:05.956000 audit[2625]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff50a48cb0 a2=0 a3=0 items=0 ppid=2405 pid=2625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:05.956000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 07:38:06.094219 kubelet[2265]: I1213 07:38:06.094143 2265 topology_manager.go:215] "Topology Admit Handler" podUID="ef06dc97-a31f-4426-bf65-a6daadc66aab" podNamespace="calico-system" podName="calico-typha-7cfbc94b7f-7tjxb" Dec 13 07:38:06.139225 kubelet[2265]: I1213 07:38:06.139050 2265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ef06dc97-a31f-4426-bf65-a6daadc66aab-typha-certs\") pod \"calico-typha-7cfbc94b7f-7tjxb\" (UID: \"ef06dc97-a31f-4426-bf65-a6daadc66aab\") " pod="calico-system/calico-typha-7cfbc94b7f-7tjxb" Dec 13 07:38:06.139225 kubelet[2265]: I1213 07:38:06.139122 2265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zgqm\" (UniqueName: \"kubernetes.io/projected/ef06dc97-a31f-4426-bf65-a6daadc66aab-kube-api-access-8zgqm\") pod \"calico-typha-7cfbc94b7f-7tjxb\" (UID: \"ef06dc97-a31f-4426-bf65-a6daadc66aab\") " pod="calico-system/calico-typha-7cfbc94b7f-7tjxb" Dec 13 07:38:06.139225 kubelet[2265]: I1213 07:38:06.139166 2265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef06dc97-a31f-4426-bf65-a6daadc66aab-tigera-ca-bundle\") pod \"calico-typha-7cfbc94b7f-7tjxb\" (UID: \"ef06dc97-a31f-4426-bf65-a6daadc66aab\") " pod="calico-system/calico-typha-7cfbc94b7f-7tjxb" Dec 13 07:38:06.246185 kubelet[2265]: I1213 07:38:06.246140 2265 topology_manager.go:215] "Topology Admit Handler" podUID="01fc9e99-f655-4bfc-96c4-cf2b3fe12197" podNamespace="calico-system" podName="calico-node-zcx7b" Dec 13 07:38:06.340867 kubelet[2265]: I1213 
07:38:06.340809 2265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/01fc9e99-f655-4bfc-96c4-cf2b3fe12197-cni-log-dir\") pod \"calico-node-zcx7b\" (UID: \"01fc9e99-f655-4bfc-96c4-cf2b3fe12197\") " pod="calico-system/calico-node-zcx7b" Dec 13 07:38:06.341252 kubelet[2265]: I1213 07:38:06.341226 2265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jp42\" (UniqueName: \"kubernetes.io/projected/01fc9e99-f655-4bfc-96c4-cf2b3fe12197-kube-api-access-8jp42\") pod \"calico-node-zcx7b\" (UID: \"01fc9e99-f655-4bfc-96c4-cf2b3fe12197\") " pod="calico-system/calico-node-zcx7b" Dec 13 07:38:06.341392 kubelet[2265]: I1213 07:38:06.341369 2265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01fc9e99-f655-4bfc-96c4-cf2b3fe12197-tigera-ca-bundle\") pod \"calico-node-zcx7b\" (UID: \"01fc9e99-f655-4bfc-96c4-cf2b3fe12197\") " pod="calico-system/calico-node-zcx7b" Dec 13 07:38:06.341579 kubelet[2265]: I1213 07:38:06.341555 2265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/01fc9e99-f655-4bfc-96c4-cf2b3fe12197-var-run-calico\") pod \"calico-node-zcx7b\" (UID: \"01fc9e99-f655-4bfc-96c4-cf2b3fe12197\") " pod="calico-system/calico-node-zcx7b" Dec 13 07:38:06.341733 kubelet[2265]: I1213 07:38:06.341710 2265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/01fc9e99-f655-4bfc-96c4-cf2b3fe12197-flexvol-driver-host\") pod \"calico-node-zcx7b\" (UID: \"01fc9e99-f655-4bfc-96c4-cf2b3fe12197\") " pod="calico-system/calico-node-zcx7b" Dec 13 07:38:06.341876 kubelet[2265]: I1213 07:38:06.341854 2265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/01fc9e99-f655-4bfc-96c4-cf2b3fe12197-lib-modules\") pod \"calico-node-zcx7b\" (UID: \"01fc9e99-f655-4bfc-96c4-cf2b3fe12197\") " pod="calico-system/calico-node-zcx7b" Dec 13 07:38:06.342061 kubelet[2265]: I1213 07:38:06.342038 2265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/01fc9e99-f655-4bfc-96c4-cf2b3fe12197-policysync\") pod \"calico-node-zcx7b\" (UID: \"01fc9e99-f655-4bfc-96c4-cf2b3fe12197\") " pod="calico-system/calico-node-zcx7b" Dec 13 07:38:06.342239 kubelet[2265]: I1213 07:38:06.342199 2265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/01fc9e99-f655-4bfc-96c4-cf2b3fe12197-xtables-lock\") pod \"calico-node-zcx7b\" (UID: \"01fc9e99-f655-4bfc-96c4-cf2b3fe12197\") " pod="calico-system/calico-node-zcx7b" Dec 13 07:38:06.342383 kubelet[2265]: I1213 07:38:06.342360 2265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/01fc9e99-f655-4bfc-96c4-cf2b3fe12197-node-certs\") pod \"calico-node-zcx7b\" (UID: \"01fc9e99-f655-4bfc-96c4-cf2b3fe12197\") " pod="calico-system/calico-node-zcx7b" Dec 13 07:38:06.342543 kubelet[2265]: I1213 07:38:06.342520 2265 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/01fc9e99-f655-4bfc-96c4-cf2b3fe12197-var-lib-calico\") pod \"calico-node-zcx7b\" (UID: \"01fc9e99-f655-4bfc-96c4-cf2b3fe12197\") " pod="calico-system/calico-node-zcx7b" Dec 13 07:38:06.342678 kubelet[2265]: I1213 07:38:06.342655 2265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/01fc9e99-f655-4bfc-96c4-cf2b3fe12197-cni-bin-dir\") pod \"calico-node-zcx7b\" (UID: \"01fc9e99-f655-4bfc-96c4-cf2b3fe12197\") " pod="calico-system/calico-node-zcx7b" Dec 13 07:38:06.342807 kubelet[2265]: I1213 07:38:06.342785 2265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/01fc9e99-f655-4bfc-96c4-cf2b3fe12197-cni-net-dir\") pod \"calico-node-zcx7b\" (UID: \"01fc9e99-f655-4bfc-96c4-cf2b3fe12197\") " pod="calico-system/calico-node-zcx7b" Dec 13 07:38:06.394905 kubelet[2265]: I1213 07:38:06.394715 2265 topology_manager.go:215] "Topology Admit Handler" podUID="6755c6bd-417c-469c-9e0b-b65078e35af8" podNamespace="calico-system" podName="csi-node-driver-jgsz2" Dec 13 07:38:06.395581 kubelet[2265]: E1213 07:38:06.395553 2265 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jgsz2" podUID="6755c6bd-417c-469c-9e0b-b65078e35af8" Dec 13 07:38:06.407240 env[1313]: time="2024-12-13T07:38:06.407151070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7cfbc94b7f-7tjxb,Uid:ef06dc97-a31f-4426-bf65-a6daadc66aab,Namespace:calico-system,Attempt:0,}" Dec 13 07:38:06.443302 kubelet[2265]: I1213 07:38:06.443249 2265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6755c6bd-417c-469c-9e0b-b65078e35af8-varrun\") pod \"csi-node-driver-jgsz2\" (UID: \"6755c6bd-417c-469c-9e0b-b65078e35af8\") " pod="calico-system/csi-node-driver-jgsz2" Dec 13 07:38:06.443940 kubelet[2265]: I1213 07:38:06.443901 2265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6755c6bd-417c-469c-9e0b-b65078e35af8-kubelet-dir\") pod \"csi-node-driver-jgsz2\" (UID: \"6755c6bd-417c-469c-9e0b-b65078e35af8\") " pod="calico-system/csi-node-driver-jgsz2" Dec 13 07:38:06.445524 kubelet[2265]: I1213 07:38:06.445499 2265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6755c6bd-417c-469c-9e0b-b65078e35af8-registration-dir\") pod \"csi-node-driver-jgsz2\" (UID: \"6755c6bd-417c-469c-9e0b-b65078e35af8\") " pod="calico-system/csi-node-driver-jgsz2" Dec 13 07:38:06.445810 kubelet[2265]: I1213 07:38:06.445785 2265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6755c6bd-417c-469c-9e0b-b65078e35af8-socket-dir\") pod \"csi-node-driver-jgsz2\" (UID: \"6755c6bd-417c-469c-9e0b-b65078e35af8\") " pod="calico-system/csi-node-driver-jgsz2" Dec 13 07:38:06.453194 env[1313]: time="2024-12-13T07:38:06.452786572Z" level=info msg="loading 
plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 07:38:06.453194 env[1313]: time="2024-12-13T07:38:06.453032002Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 07:38:06.453194 env[1313]: time="2024-12-13T07:38:06.453095592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 07:38:06.453512 env[1313]: time="2024-12-13T07:38:06.453448531Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/64c643f26193fafdd5e743cd4fab27f30e3698dcc7d5cb649144b2d4d82e8148 pid=2634 runtime=io.containerd.runc.v2 Dec 13 07:38:06.453734 kubelet[2265]: E1213 07:38:06.453703 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:06.453873 kubelet[2265]: W1213 07:38:06.453839 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:06.454043 kubelet[2265]: E1213 07:38:06.454018 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 07:38:06.464048 kubelet[2265]: E1213 07:38:06.464019 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:06.464473 kubelet[2265]: W1213 07:38:06.464195 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:06.464473 kubelet[2265]: E1213 07:38:06.464252 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 07:38:06.469019 kubelet[2265]: E1213 07:38:06.468995 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:06.469165 kubelet[2265]: W1213 07:38:06.469138 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:06.469349 kubelet[2265]: E1213 07:38:06.469311 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 07:38:06.473007 kubelet[2265]: E1213 07:38:06.472983 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:06.473129 kubelet[2265]: W1213 07:38:06.473102 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:06.473273 kubelet[2265]: E1213 07:38:06.473248 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 07:38:06.477106 kubelet[2265]: E1213 07:38:06.477082 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:06.477272 kubelet[2265]: W1213 07:38:06.477244 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:06.477723 kubelet[2265]: E1213 07:38:06.477701 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:06.479051 kubelet[2265]: W1213 07:38:06.479023 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:06.494944 kubelet[2265]: E1213 07:38:06.490775 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:06.495128 kubelet[2265]: W1213 07:38:06.495102 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:06.495302 kubelet[2265]: E1213 07:38:06.495277 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 07:38:06.499001 kubelet[2265]: E1213 07:38:06.498965 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 07:38:06.499222 kubelet[2265]: E1213 07:38:06.499195 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 07:38:06.499459 kubelet[2265]: E1213 07:38:06.499435 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:06.499594 kubelet[2265]: W1213 07:38:06.499567 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:06.502949 kubelet[2265]: E1213 07:38:06.502920 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 07:38:06.504033 kubelet[2265]: E1213 07:38:06.504010 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:06.504167 kubelet[2265]: W1213 07:38:06.504141 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:06.504304 kubelet[2265]: E1213 07:38:06.504281 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 07:38:06.508069 kubelet[2265]: E1213 07:38:06.508046 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:06.508345 kubelet[2265]: W1213 07:38:06.508305 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:06.508723 kubelet[2265]: E1213 07:38:06.508698 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 07:38:06.515915 kubelet[2265]: E1213 07:38:06.514030 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:06.515915 kubelet[2265]: W1213 07:38:06.514151 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:06.515915 kubelet[2265]: E1213 07:38:06.514239 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 07:38:06.517928 kubelet[2265]: E1213 07:38:06.517765 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:06.517928 kubelet[2265]: W1213 07:38:06.517792 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:06.518071 kubelet[2265]: E1213 07:38:06.517949 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 07:38:06.521439 kubelet[2265]: E1213 07:38:06.521396 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:06.521521 kubelet[2265]: W1213 07:38:06.521448 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:06.521521 kubelet[2265]: E1213 07:38:06.521474 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 07:38:06.533916 kubelet[2265]: E1213 07:38:06.532903 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:06.533916 kubelet[2265]: W1213 07:38:06.532936 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:06.533916 kubelet[2265]: E1213 07:38:06.532978 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 07:38:06.533916 kubelet[2265]: E1213 07:38:06.533315 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:06.533916 kubelet[2265]: W1213 07:38:06.533330 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:06.533916 kubelet[2265]: E1213 07:38:06.533446 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 07:38:06.533916 kubelet[2265]: E1213 07:38:06.533727 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:06.533916 kubelet[2265]: W1213 07:38:06.533742 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:06.533916 kubelet[2265]: E1213 07:38:06.533871 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 07:38:06.534411 kubelet[2265]: E1213 07:38:06.534098 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:06.534411 kubelet[2265]: W1213 07:38:06.534115 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:06.534411 kubelet[2265]: E1213 07:38:06.534276 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 07:38:06.534607 kubelet[2265]: E1213 07:38:06.534498 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:06.534607 kubelet[2265]: W1213 07:38:06.534512 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:06.534718 kubelet[2265]: E1213 07:38:06.534645 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 07:38:06.534718 kubelet[2265]: I1213 07:38:06.534686 2265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzf5f\" (UniqueName: \"kubernetes.io/projected/6755c6bd-417c-469c-9e0b-b65078e35af8-kube-api-access-dzf5f\") pod \"csi-node-driver-jgsz2\" (UID: \"6755c6bd-417c-469c-9e0b-b65078e35af8\") " pod="calico-system/csi-node-driver-jgsz2" Dec 13 07:38:06.537918 kubelet[2265]: E1213 07:38:06.534859 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:06.537918 kubelet[2265]: W1213 07:38:06.534938 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:06.537918 kubelet[2265]: E1213 07:38:06.534958 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 07:38:06.537918 kubelet[2265]: E1213 07:38:06.535197 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:06.537918 kubelet[2265]: W1213 07:38:06.535213 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:06.537918 kubelet[2265]: E1213 07:38:06.535231 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 07:38:06.537918 kubelet[2265]: E1213 07:38:06.535484 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:06.537918 kubelet[2265]: W1213 07:38:06.535497 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:06.537918 kubelet[2265]: E1213 07:38:06.535514 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 07:38:06.537918 kubelet[2265]: E1213 07:38:06.535800 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:06.538427 kubelet[2265]: W1213 07:38:06.535816 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:06.538427 kubelet[2265]: E1213 07:38:06.535833 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 07:38:06.538427 kubelet[2265]: E1213 07:38:06.536070 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:06.538427 kubelet[2265]: W1213 07:38:06.536082 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:06.538427 kubelet[2265]: E1213 07:38:06.536102 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 07:38:06.538427 kubelet[2265]: E1213 07:38:06.536332 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:06.538427 kubelet[2265]: W1213 07:38:06.536345 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:06.538427 kubelet[2265]: E1213 07:38:06.536392 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 07:38:06.538427 kubelet[2265]: E1213 07:38:06.536690 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:06.538427 kubelet[2265]: W1213 07:38:06.536704 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:06.538955 kubelet[2265]: E1213 07:38:06.536723 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 07:38:06.538955 kubelet[2265]: E1213 07:38:06.537020 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:06.538955 kubelet[2265]: W1213 07:38:06.537034 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:06.538955 kubelet[2265]: E1213 07:38:06.537051 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 07:38:06.538955 kubelet[2265]: E1213 07:38:06.537250 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:06.538955 kubelet[2265]: W1213 07:38:06.537263 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:06.538955 kubelet[2265]: E1213 07:38:06.537281 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 07:38:06.538955 kubelet[2265]: E1213 07:38:06.537533 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:06.538955 kubelet[2265]: W1213 07:38:06.537547 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:06.538955 kubelet[2265]: E1213 07:38:06.537563 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 07:38:06.539461 kubelet[2265]: E1213 07:38:06.538077 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:06.539461 kubelet[2265]: W1213 07:38:06.538091 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:06.539461 kubelet[2265]: E1213 07:38:06.538108 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 07:38:06.539461 kubelet[2265]: E1213 07:38:06.538342 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:06.539461 kubelet[2265]: W1213 07:38:06.538359 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:06.539461 kubelet[2265]: E1213 07:38:06.538375 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 07:38:06.539461 kubelet[2265]: E1213 07:38:06.538638 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:06.539461 kubelet[2265]: W1213 07:38:06.538651 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:06.539461 kubelet[2265]: E1213 07:38:06.538667 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 07:38:06.539461 kubelet[2265]: E1213 07:38:06.538919 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:06.545085 kubelet[2265]: W1213 07:38:06.538933 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:06.545085 kubelet[2265]: E1213 07:38:06.538950 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 07:38:06.545085 kubelet[2265]: E1213 07:38:06.539176 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:06.545085 kubelet[2265]: W1213 07:38:06.539189 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:06.545085 kubelet[2265]: E1213 07:38:06.539204 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 07:38:06.545085 kubelet[2265]: E1213 07:38:06.542215 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:06.545085 kubelet[2265]: W1213 07:38:06.542234 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:06.545085 kubelet[2265]: E1213 07:38:06.542253 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 07:38:06.568293 kubelet[2265]: E1213 07:38:06.568238 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:06.568577 kubelet[2265]: W1213 07:38:06.568547 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:06.568717 kubelet[2265]: E1213 07:38:06.568690 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 07:38:06.579304 env[1313]: time="2024-12-13T07:38:06.578630693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zcx7b,Uid:01fc9e99-f655-4bfc-96c4-cf2b3fe12197,Namespace:calico-system,Attempt:0,}" Dec 13 07:38:06.638027 kubelet[2265]: E1213 07:38:06.637671 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:06.638027 kubelet[2265]: W1213 07:38:06.637712 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:06.638027 kubelet[2265]: E1213 07:38:06.637741 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 07:38:06.639492 kubelet[2265]: E1213 07:38:06.638568 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:06.639492 kubelet[2265]: W1213 07:38:06.638591 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:06.639492 kubelet[2265]: E1213 07:38:06.638615 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 07:38:06.639492 kubelet[2265]: E1213 07:38:06.638953 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:06.639492 kubelet[2265]: W1213 07:38:06.638967 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:06.639492 kubelet[2265]: E1213 07:38:06.639000 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 07:38:06.639492 kubelet[2265]: E1213 07:38:06.639301 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:06.639492 kubelet[2265]: W1213 07:38:06.639315 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:06.639492 kubelet[2265]: E1213 07:38:06.639337 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 07:38:06.640402 kubelet[2265]: E1213 07:38:06.640284 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:06.640402 kubelet[2265]: W1213 07:38:06.640302 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:06.640620 kubelet[2265]: E1213 07:38:06.640595 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 07:38:06.640976 kubelet[2265]: E1213 07:38:06.640955 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:06.641122 kubelet[2265]: W1213 07:38:06.641097 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:06.641369 kubelet[2265]: E1213 07:38:06.641348 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 07:38:06.641568 kubelet[2265]: E1213 07:38:06.641547 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:06.641748 kubelet[2265]: W1213 07:38:06.641666 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:06.642041 kubelet[2265]: E1213 07:38:06.642019 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 07:38:06.642247 kubelet[2265]: E1213 07:38:06.642226 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:06.642441 kubelet[2265]: W1213 07:38:06.642403 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:06.642682 kubelet[2265]: E1213 07:38:06.642660 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 07:38:06.642990 kubelet[2265]: E1213 07:38:06.642969 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:06.643114 kubelet[2265]: W1213 07:38:06.643090 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:06.643399 kubelet[2265]: E1213 07:38:06.643377 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 07:38:06.643607 kubelet[2265]: E1213 07:38:06.643587 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:06.643724 kubelet[2265]: W1213 07:38:06.643699 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:06.653586 kubelet[2265]: E1213 07:38:06.646971 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 07:38:06.654218 kubelet[2265]: E1213 07:38:06.654192 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:06.654363 kubelet[2265]: W1213 07:38:06.654336 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:06.654640 kubelet[2265]: E1213 07:38:06.654616 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 07:38:06.655115 env[1313]: time="2024-12-13T07:38:06.640639226Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 07:38:06.655115 env[1313]: time="2024-12-13T07:38:06.640758970Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 07:38:06.655115 env[1313]: time="2024-12-13T07:38:06.640781836Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 07:38:06.655115 env[1313]: time="2024-12-13T07:38:06.641036129Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/78c0cac706257aca8ecb2645c99934fa45125e4c977a173d7d55c8797d00c334 pid=2704 runtime=io.containerd.runc.v2 Dec 13 07:38:06.655591 kubelet[2265]: E1213 07:38:06.655569 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:06.655722 kubelet[2265]: W1213 07:38:06.655697 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:06.656111 kubelet[2265]: E1213 07:38:06.656089 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:06.656410 kubelet[2265]: W1213 07:38:06.656385 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:06.656970 kubelet[2265]: E1213 07:38:06.656947 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:06.664762 kubelet[2265]: W1213 07:38:06.664721 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:06.667230 kubelet[2265]: E1213 07:38:06.657129 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 07:38:06.667767 kubelet[2265]: E1213 07:38:06.657142 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 07:38:06.668201 kubelet[2265]: E1213 07:38:06.668178 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 07:38:06.668596 kubelet[2265]: E1213 07:38:06.668573 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:06.668736 kubelet[2265]: W1213 07:38:06.668710 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:06.669330 kubelet[2265]: E1213 07:38:06.669309 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:06.669585 kubelet[2265]: W1213 07:38:06.669525 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:06.674904 kubelet[2265]: E1213 07:38:06.674157 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:06.674904 kubelet[2265]: W1213 07:38:06.674182 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:06.675138 kubelet[2265]: E1213 07:38:06.675084 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:06.675138 kubelet[2265]: W1213 07:38:06.675102 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:06.675368 kubelet[2265]: E1213 07:38:06.675343 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:06.675368 kubelet[2265]: W1213 07:38:06.675363 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:06.675531 kubelet[2265]: E1213 07:38:06.675386 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 07:38:06.675777 kubelet[2265]: E1213 07:38:06.675753 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:06.675777 kubelet[2265]: W1213 07:38:06.675773 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:06.675924 kubelet[2265]: E1213 07:38:06.675792 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 07:38:06.675924 kubelet[2265]: E1213 07:38:06.669759 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 07:38:06.676168 kubelet[2265]: E1213 07:38:06.676144 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:06.676168 kubelet[2265]: W1213 07:38:06.676165 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:06.676298 kubelet[2265]: E1213 07:38:06.676184 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 07:38:06.677090 kubelet[2265]: E1213 07:38:06.677064 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 07:38:06.687270 kubelet[2265]: E1213 07:38:06.677661 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:06.687270 kubelet[2265]: W1213 07:38:06.687112 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:06.687270 kubelet[2265]: E1213 07:38:06.677674 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 07:38:06.692744 kubelet[2265]: E1213 07:38:06.691061 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 07:38:06.692744 kubelet[2265]: E1213 07:38:06.677836 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 07:38:06.692744 kubelet[2265]: E1213 07:38:06.691676 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:06.692744 kubelet[2265]: W1213 07:38:06.691693 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:06.692744 kubelet[2265]: E1213 07:38:06.691931 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 07:38:06.692744 kubelet[2265]: E1213 07:38:06.692046 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:06.692744 kubelet[2265]: W1213 07:38:06.692060 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:06.692744 kubelet[2265]: E1213 07:38:06.692216 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 07:38:06.692744 kubelet[2265]: E1213 07:38:06.692447 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:06.692744 kubelet[2265]: W1213 07:38:06.692461 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:06.693434 kubelet[2265]: E1213 07:38:06.692479 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 07:38:06.712440 kubelet[2265]: E1213 07:38:06.711086 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:06.712440 kubelet[2265]: W1213 07:38:06.711116 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:06.712440 kubelet[2265]: E1213 07:38:06.711166 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 07:38:06.784220 env[1313]: time="2024-12-13T07:38:06.784126822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7cfbc94b7f-7tjxb,Uid:ef06dc97-a31f-4426-bf65-a6daadc66aab,Namespace:calico-system,Attempt:0,} returns sandbox id \"64c643f26193fafdd5e743cd4fab27f30e3698dcc7d5cb649144b2d4d82e8148\"" Dec 13 07:38:06.792468 env[1313]: time="2024-12-13T07:38:06.792304407Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Dec 13 07:38:06.813953 env[1313]: time="2024-12-13T07:38:06.813442859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zcx7b,Uid:01fc9e99-f655-4bfc-96c4-cf2b3fe12197,Namespace:calico-system,Attempt:0,} returns sandbox id \"78c0cac706257aca8ecb2645c99934fa45125e4c977a173d7d55c8797d00c334\"" Dec 13 07:38:06.970000 audit[2772]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=2772 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 07:38:06.970000 audit[2772]: SYSCALL arch=c000003e syscall=46 success=yes exit=6652 a0=3 a1=7fffd80b0010 a2=0 a3=7fffd80afffc items=0 ppid=2405 pid=2772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:06.970000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 07:38:06.975000 audit[2772]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2772 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 07:38:06.975000 audit[2772]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffd80b0010 a2=0 a3=0 items=0 ppid=2405 pid=2772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:06.975000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 
07:38:07.278104 systemd[1]: run-containerd-runc-k8s.io-64c643f26193fafdd5e743cd4fab27f30e3698dcc7d5cb649144b2d4d82e8148-runc.jfy5fT.mount: Deactivated successfully. Dec 13 07:38:08.041383 kubelet[2265]: E1213 07:38:08.041324 2265 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jgsz2" podUID="6755c6bd-417c-469c-9e0b-b65078e35af8" Dec 13 07:38:08.266957 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2318243825.mount: Deactivated successfully. Dec 13 07:38:09.975783 env[1313]: time="2024-12-13T07:38:09.975605224Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:38:09.977907 env[1313]: time="2024-12-13T07:38:09.977854875Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:38:09.979692 env[1313]: time="2024-12-13T07:38:09.979649605Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:38:09.981407 env[1313]: time="2024-12-13T07:38:09.981366078Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:38:09.982464 env[1313]: time="2024-12-13T07:38:09.982422317Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Dec 13 07:38:09.983876 env[1313]: time="2024-12-13T07:38:09.983842496Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Dec 13 07:38:10.013235 env[1313]: time="2024-12-13T07:38:10.013165058Z" level=info msg="CreateContainer within sandbox \"64c643f26193fafdd5e743cd4fab27f30e3698dcc7d5cb649144b2d4d82e8148\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 13 07:38:10.031165 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3525218857.mount: Deactivated successfully. 
Dec 13 07:38:10.037535 env[1313]: time="2024-12-13T07:38:10.037477334Z" level=info msg="CreateContainer within sandbox \"64c643f26193fafdd5e743cd4fab27f30e3698dcc7d5cb649144b2d4d82e8148\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"75966bf08147f13b656824bdb205645ff0154fbaadad016567cff98c3f6d5103\"" Dec 13 07:38:10.038875 env[1313]: time="2024-12-13T07:38:10.038839024Z" level=info msg="StartContainer for \"75966bf08147f13b656824bdb205645ff0154fbaadad016567cff98c3f6d5103\"" Dec 13 07:38:10.041169 kubelet[2265]: E1213 07:38:10.041064 2265 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jgsz2" podUID="6755c6bd-417c-469c-9e0b-b65078e35af8" Dec 13 07:38:10.170195 env[1313]: time="2024-12-13T07:38:10.170109925Z" level=info msg="StartContainer for \"75966bf08147f13b656824bdb205645ff0154fbaadad016567cff98c3f6d5103\" returns successfully" Dec 13 07:38:10.267712 kubelet[2265]: E1213 07:38:10.267573 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:10.267712 kubelet[2265]: W1213 07:38:10.267607 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:10.267712 kubelet[2265]: E1213 07:38:10.267653 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 07:38:10.268068 kubelet[2265]: E1213 07:38:10.267947 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:10.268068 kubelet[2265]: W1213 07:38:10.267970 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:10.268068 kubelet[2265]: E1213 07:38:10.267988 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 07:38:10.269192 kubelet[2265]: E1213 07:38:10.268259 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:10.269192 kubelet[2265]: W1213 07:38:10.268277 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:10.269192 kubelet[2265]: E1213 07:38:10.268295 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 07:38:10.269192 kubelet[2265]: E1213 07:38:10.268568 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:10.269192 kubelet[2265]: W1213 07:38:10.268591 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:10.269192 kubelet[2265]: E1213 07:38:10.268608 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 07:38:10.301479 kubelet[2265]: E1213 07:38:10.301445 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:10.301479 kubelet[2265]: W1213 07:38:10.301479 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:10.301654 kubelet[2265]: E1213 07:38:10.301501 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
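
The burst of kubelet errors above is the FlexVolume dynamic-probe loop hitting /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds before the driver binary exists: the "uds" executable is missing, the "init" call therefore produces no output, and unmarshalling the empty string fails with "unexpected end of JSON input". For orientation only, the sketch below is a minimal, hypothetical FlexVolume driver in Go that answers the "init" probe with the small JSON status object the kubelet expects; it is not the Calico driver that eventually gets installed.

// Minimal, hypothetical FlexVolume driver sketch: it implements only the
// "init" call that the kubelet issues while probing plugin directories.
// The real driver installed by the flexvol-driver container does far more.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the JSON object the kubelet expects on stdout.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) < 2 {
		fmt.Println(`{"status":"Not supported"}`)
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		// An empty stdout at this point is exactly what produces
		// "unexpected end of JSON input" in driver-call.go.
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
	default:
		fmt.Println(`{"status":"Not supported"}`)
	}
}

Installed as /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, a driver shaped like this would satisfy the probe; in this boot the errors simply persist until the flexvol-driver init container below installs the real binary.
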
Dec 13 07:38:11.187186 kubelet[2265]: I1213 07:38:11.187146 2265 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 07:38:11.279637 kubelet[2265]: E1213 07:38:11.278703 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:11.279637 kubelet[2265]: W1213 07:38:11.278753 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:11.279637 kubelet[2265]: E1213 07:38:11.278784 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 07:38:11.308447 kubelet[2265]: E1213 07:38:11.308426 2265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 07:38:11.308577 kubelet[2265]: W1213 07:38:11.308551 2265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 07:38:11.308696 kubelet[2265]: E1213 07:38:11.308674 2265 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 07:38:11.704273 env[1313]: time="2024-12-13T07:38:11.704207751Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:38:11.706479 env[1313]: time="2024-12-13T07:38:11.706444331Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:38:11.708567 env[1313]: time="2024-12-13T07:38:11.708523662Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:38:11.711373 env[1313]: time="2024-12-13T07:38:11.711290341Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:38:11.713665 env[1313]: time="2024-12-13T07:38:11.713542624Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Dec 13 07:38:11.717937 env[1313]: time="2024-12-13T07:38:11.717560679Z" level=info msg="CreateContainer within sandbox \"78c0cac706257aca8ecb2645c99934fa45125e4c977a173d7d55c8797d00c334\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 07:38:11.739707 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount656354330.mount: Deactivated successfully. Dec 13 07:38:11.747624 env[1313]: time="2024-12-13T07:38:11.747567810Z" level=info msg="CreateContainer within sandbox \"78c0cac706257aca8ecb2645c99934fa45125e4c977a173d7d55c8797d00c334\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a52f6e682317ced7964ded433ac7aa9279dc9847033b2d8c7c1a0f1bcecc4798\"" Dec 13 07:38:11.750494 env[1313]: time="2024-12-13T07:38:11.750447238Z" level=info msg="StartContainer for \"a52f6e682317ced7964ded433ac7aa9279dc9847033b2d8c7c1a0f1bcecc4798\"" Dec 13 07:38:11.857821 env[1313]: time="2024-12-13T07:38:11.857762533Z" level=info msg="StartContainer for \"a52f6e682317ced7964ded433ac7aa9279dc9847033b2d8c7c1a0f1bcecc4798\" returns successfully" Dec 13 07:38:11.936402 env[1313]: time="2024-12-13T07:38:11.936345235Z" level=info msg="shim disconnected" id=a52f6e682317ced7964ded433ac7aa9279dc9847033b2d8c7c1a0f1bcecc4798 Dec 13 07:38:11.936824 env[1313]: time="2024-12-13T07:38:11.936792591Z" level=warning msg="cleaning up after shim disconnected" id=a52f6e682317ced7964ded433ac7aa9279dc9847033b2d8c7c1a0f1bcecc4798 namespace=k8s.io Dec 13 07:38:11.937032 env[1313]: time="2024-12-13T07:38:11.937003186Z" level=info msg="cleaning up dead shim" Dec 13 07:38:11.948645 env[1313]: time="2024-12-13T07:38:11.948580714Z" level=warning msg="cleanup warnings time=\"2024-12-13T07:38:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2924 runtime=io.containerd.runc.v2\n" Dec 13 07:38:11.996981 systemd[1]: run-containerd-runc-k8s.io-a52f6e682317ced7964ded433ac7aa9279dc9847033b2d8c7c1a0f1bcecc4798-runc.389w62.mount: Deactivated successfully. 
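
The pod2daemon-flexvol image pulled here backs the flexvol-driver init container, which populates the nodeagent~uds plugin directory that the probes above were failing on. As a note on the path seen in those warnings, a FlexVolume plugin directory is named <vendor>~<driver> and the kubelet calls an executable named after the driver inside it; the snippet below only illustrates that convention, it is not kubelet code.

// Sketch of the <vendor>~<driver> naming convention behind the plugin path
// in the warnings above: directory "nodeagent~uds" means vendor "nodeagent",
// driver "uds", and the kubelet expects an executable named "uds" inside it.
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// execPathFor returns the executable the kubelet would invoke for a plugin
// directory such as "nodeagent~uds" under the given plugin root.
func execPathFor(pluginRoot, dirName string) (string, error) {
	parts := strings.SplitN(dirName, "~", 2)
	if len(parts) != 2 || parts[0] == "" || parts[1] == "" {
		return "", fmt.Errorf("not a <vendor>~<driver> directory: %q", dirName)
	}
	driver := parts[1]
	return filepath.Join(pluginRoot, dirName, driver), nil
}

func main() {
	p, err := execPathFor("/opt/libexec/kubernetes/kubelet-plugins/volume/exec", "nodeagent~uds")
	if err != nil {
		panic(err)
	}
	// Prints the same path reported in the driver-call warnings above.
	fmt.Println(p)
}
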
Dec 13 07:38:11.997203 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a52f6e682317ced7964ded433ac7aa9279dc9847033b2d8c7c1a0f1bcecc4798-rootfs.mount: Deactivated successfully. Dec 13 07:38:12.041381 kubelet[2265]: E1213 07:38:12.040927 2265 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jgsz2" podUID="6755c6bd-417c-469c-9e0b-b65078e35af8" Dec 13 07:38:12.196624 env[1313]: time="2024-12-13T07:38:12.196450014Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 07:38:12.220591 kubelet[2265]: I1213 07:38:12.217020 2265 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-7cfbc94b7f-7tjxb" podStartSLOduration=3.025434198 podStartE2EDuration="6.216964782s" podCreationTimestamp="2024-12-13 07:38:06 +0000 UTC" firstStartedPulling="2024-12-13 07:38:06.791676224 +0000 UTC m=+22.068970551" lastFinishedPulling="2024-12-13 07:38:09.983206791 +0000 UTC m=+25.260501135" observedRunningTime="2024-12-13 07:38:10.213731321 +0000 UTC m=+25.491025670" watchObservedRunningTime="2024-12-13 07:38:12.216964782 +0000 UTC m=+27.494259134" Dec 13 07:38:14.040902 kubelet[2265]: E1213 07:38:14.040793 2265 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jgsz2" podUID="6755c6bd-417c-469c-9e0b-b65078e35af8" Dec 13 07:38:16.040529 kubelet[2265]: E1213 07:38:16.040467 2265 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jgsz2" podUID="6755c6bd-417c-469c-9e0b-b65078e35af8" Dec 13 07:38:18.040676 kubelet[2265]: E1213 07:38:18.040624 2265 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jgsz2" podUID="6755c6bd-417c-469c-9e0b-b65078e35af8" Dec 13 07:38:19.009574 env[1313]: time="2024-12-13T07:38:19.009512733Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:38:19.012953 env[1313]: time="2024-12-13T07:38:19.012901929Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:38:19.018435 env[1313]: time="2024-12-13T07:38:19.018339943Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:38:19.021336 env[1313]: time="2024-12-13T07:38:19.021301706Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 
07:38:19.022406 env[1313]: time="2024-12-13T07:38:19.022359951Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Dec 13 07:38:19.028839 env[1313]: time="2024-12-13T07:38:19.028790357Z" level=info msg="CreateContainer within sandbox \"78c0cac706257aca8ecb2645c99934fa45125e4c977a173d7d55c8797d00c334\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 07:38:19.046889 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount150733807.mount: Deactivated successfully. Dec 13 07:38:19.053924 env[1313]: time="2024-12-13T07:38:19.053840351Z" level=info msg="CreateContainer within sandbox \"78c0cac706257aca8ecb2645c99934fa45125e4c977a173d7d55c8797d00c334\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8daa92232baec4af530c4b5ed12b1dd63d4894b75e11fec506f86158e1046c26\"" Dec 13 07:38:19.057217 env[1313]: time="2024-12-13T07:38:19.055516434Z" level=info msg="StartContainer for \"8daa92232baec4af530c4b5ed12b1dd63d4894b75e11fec506f86158e1046c26\"" Dec 13 07:38:19.109670 systemd[1]: run-containerd-runc-k8s.io-8daa92232baec4af530c4b5ed12b1dd63d4894b75e11fec506f86158e1046c26-runc.vLGBx5.mount: Deactivated successfully. Dec 13 07:38:19.180037 env[1313]: time="2024-12-13T07:38:19.179970106Z" level=info msg="StartContainer for \"8daa92232baec4af530c4b5ed12b1dd63d4894b75e11fec506f86158e1046c26\" returns successfully" Dec 13 07:38:20.066922 kubelet[2265]: E1213 07:38:20.066841 2265 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jgsz2" podUID="6755c6bd-417c-469c-9e0b-b65078e35af8" Dec 13 07:38:20.219533 env[1313]: time="2024-12-13T07:38:20.219426708Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 07:38:20.239478 kubelet[2265]: I1213 07:38:20.239442 2265 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 07:38:20.269136 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8daa92232baec4af530c4b5ed12b1dd63d4894b75e11fec506f86158e1046c26-rootfs.mount: Deactivated successfully. 
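
At this point install-cni has started and containerd reacts to writes under /etc/cni/net.d, but the only file written so far (calico-kubeconfig) is not a network configuration, so the reload still ends in "no network config found" and the kubelet keeps reporting that the CNI plugin is not initialized. The sketch below illustrates the shape of that presence check, under the assumption that only *.conf, *.conflist and *.json files count as network configs; it is not containerd's actual loader.

// Rough sketch of the check behind "no network config found in /etc/cni/net.d":
// look for *.conf, *.conflist or *.json files; other files in the directory,
// such as calico-kubeconfig, do not count. Illustration only.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func hasNetworkConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasNetworkConfig("/etc/cni/net.d")
	if err != nil {
		fmt.Println("cni config load failed:", err)
		return
	}
	if !ok {
		// The state this log is in at 07:38:20: only calico-kubeconfig exists.
		fmt.Println("no network config found in /etc/cni/net.d")
	}
}

Once install-cni finishes writing its network config (typically a file such as 10-calico.conflist), the same fs-change-event path picks it up and sandbox creation can proceed.
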
Dec 13 07:38:20.282531 env[1313]: time="2024-12-13T07:38:20.282422915Z" level=info msg="shim disconnected" id=8daa92232baec4af530c4b5ed12b1dd63d4894b75e11fec506f86158e1046c26 Dec 13 07:38:20.282531 env[1313]: time="2024-12-13T07:38:20.282490170Z" level=warning msg="cleaning up after shim disconnected" id=8daa92232baec4af530c4b5ed12b1dd63d4894b75e11fec506f86158e1046c26 namespace=k8s.io Dec 13 07:38:20.282531 env[1313]: time="2024-12-13T07:38:20.282506781Z" level=info msg="cleaning up dead shim" Dec 13 07:38:20.297520 kubelet[2265]: I1213 07:38:20.295565 2265 topology_manager.go:215] "Topology Admit Handler" podUID="58a0fd67-8895-4d73-a1c6-6bb7d5fa543a" podNamespace="kube-system" podName="coredns-76f75df574-msfc4" Dec 13 07:38:20.312018 kubelet[2265]: I1213 07:38:20.311974 2265 topology_manager.go:215] "Topology Admit Handler" podUID="acbf9fc3-226f-42b6-886d-99d3e542c4e9" podNamespace="calico-apiserver" podName="calico-apiserver-8888c77cc-8shn8" Dec 13 07:38:20.313790 kubelet[2265]: I1213 07:38:20.313761 2265 topology_manager.go:215] "Topology Admit Handler" podUID="7ce172f1-b87c-421e-99dc-6fd967b7a919" podNamespace="kube-system" podName="coredns-76f75df574-cfqbg" Dec 13 07:38:20.313904 env[1313]: time="2024-12-13T07:38:20.313809745Z" level=warning msg="cleanup warnings time=\"2024-12-13T07:38:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2994 runtime=io.containerd.runc.v2\n" Dec 13 07:38:20.314186 kubelet[2265]: I1213 07:38:20.314142 2265 topology_manager.go:215] "Topology Admit Handler" podUID="d8ba17ac-a602-4e90-9b1d-ab55d1e3cb01" podNamespace="calico-apiserver" podName="calico-apiserver-8888c77cc-mzdlq" Dec 13 07:38:20.323588 kubelet[2265]: I1213 07:38:20.323544 2265 topology_manager.go:215] "Topology Admit Handler" podUID="1bec6ab5-1944-4370-b6de-fe55870be674" podNamespace="calico-system" podName="calico-kube-controllers-7ccc67bff4-xdz9n" Dec 13 07:38:20.461162 kubelet[2265]: I1213 07:38:20.461041 2265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/58a0fd67-8895-4d73-a1c6-6bb7d5fa543a-config-volume\") pod \"coredns-76f75df574-msfc4\" (UID: \"58a0fd67-8895-4d73-a1c6-6bb7d5fa543a\") " pod="kube-system/coredns-76f75df574-msfc4" Dec 13 07:38:20.461498 kubelet[2265]: I1213 07:38:20.461209 2265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9h7t\" (UniqueName: \"kubernetes.io/projected/58a0fd67-8895-4d73-a1c6-6bb7d5fa543a-kube-api-access-m9h7t\") pod \"coredns-76f75df574-msfc4\" (UID: \"58a0fd67-8895-4d73-a1c6-6bb7d5fa543a\") " pod="kube-system/coredns-76f75df574-msfc4" Dec 13 07:38:20.461498 kubelet[2265]: I1213 07:38:20.461311 2265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d8ba17ac-a602-4e90-9b1d-ab55d1e3cb01-calico-apiserver-certs\") pod \"calico-apiserver-8888c77cc-mzdlq\" (UID: \"d8ba17ac-a602-4e90-9b1d-ab55d1e3cb01\") " pod="calico-apiserver/calico-apiserver-8888c77cc-mzdlq" Dec 13 07:38:20.461498 kubelet[2265]: I1213 07:38:20.461406 2265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mstq\" (UniqueName: \"kubernetes.io/projected/1bec6ab5-1944-4370-b6de-fe55870be674-kube-api-access-4mstq\") pod \"calico-kube-controllers-7ccc67bff4-xdz9n\" (UID: \"1bec6ab5-1944-4370-b6de-fe55870be674\") " 
pod="calico-system/calico-kube-controllers-7ccc67bff4-xdz9n" Dec 13 07:38:20.461498 kubelet[2265]: I1213 07:38:20.461484 2265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rltx6\" (UniqueName: \"kubernetes.io/projected/acbf9fc3-226f-42b6-886d-99d3e542c4e9-kube-api-access-rltx6\") pod \"calico-apiserver-8888c77cc-8shn8\" (UID: \"acbf9fc3-226f-42b6-886d-99d3e542c4e9\") " pod="calico-apiserver/calico-apiserver-8888c77cc-8shn8" Dec 13 07:38:20.461791 kubelet[2265]: I1213 07:38:20.461571 2265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdvwx\" (UniqueName: \"kubernetes.io/projected/7ce172f1-b87c-421e-99dc-6fd967b7a919-kube-api-access-wdvwx\") pod \"coredns-76f75df574-cfqbg\" (UID: \"7ce172f1-b87c-421e-99dc-6fd967b7a919\") " pod="kube-system/coredns-76f75df574-cfqbg" Dec 13 07:38:20.461791 kubelet[2265]: I1213 07:38:20.461652 2265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/acbf9fc3-226f-42b6-886d-99d3e542c4e9-calico-apiserver-certs\") pod \"calico-apiserver-8888c77cc-8shn8\" (UID: \"acbf9fc3-226f-42b6-886d-99d3e542c4e9\") " pod="calico-apiserver/calico-apiserver-8888c77cc-8shn8" Dec 13 07:38:20.461791 kubelet[2265]: I1213 07:38:20.461728 2265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bec6ab5-1944-4370-b6de-fe55870be674-tigera-ca-bundle\") pod \"calico-kube-controllers-7ccc67bff4-xdz9n\" (UID: \"1bec6ab5-1944-4370-b6de-fe55870be674\") " pod="calico-system/calico-kube-controllers-7ccc67bff4-xdz9n" Dec 13 07:38:20.462009 kubelet[2265]: I1213 07:38:20.461806 2265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7ce172f1-b87c-421e-99dc-6fd967b7a919-config-volume\") pod \"coredns-76f75df574-cfqbg\" (UID: \"7ce172f1-b87c-421e-99dc-6fd967b7a919\") " pod="kube-system/coredns-76f75df574-cfqbg" Dec 13 07:38:20.462009 kubelet[2265]: I1213 07:38:20.461843 2265 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzmsd\" (UniqueName: \"kubernetes.io/projected/d8ba17ac-a602-4e90-9b1d-ab55d1e3cb01-kube-api-access-fzmsd\") pod \"calico-apiserver-8888c77cc-mzdlq\" (UID: \"d8ba17ac-a602-4e90-9b1d-ab55d1e3cb01\") " pod="calico-apiserver/calico-apiserver-8888c77cc-mzdlq" Dec 13 07:38:20.621513 env[1313]: time="2024-12-13T07:38:20.620669067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8888c77cc-mzdlq,Uid:d8ba17ac-a602-4e90-9b1d-ab55d1e3cb01,Namespace:calico-apiserver,Attempt:0,}" Dec 13 07:38:20.642265 env[1313]: time="2024-12-13T07:38:20.642211513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8888c77cc-8shn8,Uid:acbf9fc3-226f-42b6-886d-99d3e542c4e9,Namespace:calico-apiserver,Attempt:0,}" Dec 13 07:38:20.646613 env[1313]: time="2024-12-13T07:38:20.646575028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7ccc67bff4-xdz9n,Uid:1bec6ab5-1944-4370-b6de-fe55870be674,Namespace:calico-system,Attempt:0,}" Dec 13 07:38:20.650682 env[1313]: time="2024-12-13T07:38:20.650641981Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-76f75df574-cfqbg,Uid:7ce172f1-b87c-421e-99dc-6fd967b7a919,Namespace:kube-system,Attempt:0,}" Dec 13 07:38:20.909842 env[1313]: time="2024-12-13T07:38:20.908764175Z" level=error msg="Failed to destroy network for sandbox \"d9c037e468016503e64e7fabc42ffc6e14dde911867e2872adbd2fec867e09e7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 07:38:20.910704 env[1313]: time="2024-12-13T07:38:20.910387534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-msfc4,Uid:58a0fd67-8895-4d73-a1c6-6bb7d5fa543a,Namespace:kube-system,Attempt:0,}" Dec 13 07:38:20.911641 env[1313]: time="2024-12-13T07:38:20.911562728Z" level=error msg="encountered an error cleaning up failed sandbox \"d9c037e468016503e64e7fabc42ffc6e14dde911867e2872adbd2fec867e09e7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 07:38:20.911841 env[1313]: time="2024-12-13T07:38:20.911786320Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8888c77cc-mzdlq,Uid:d8ba17ac-a602-4e90-9b1d-ab55d1e3cb01,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d9c037e468016503e64e7fabc42ffc6e14dde911867e2872adbd2fec867e09e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 07:38:20.912833 kubelet[2265]: E1213 07:38:20.912325 2265 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9c037e468016503e64e7fabc42ffc6e14dde911867e2872adbd2fec867e09e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 07:38:20.912833 kubelet[2265]: E1213 07:38:20.912402 2265 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9c037e468016503e64e7fabc42ffc6e14dde911867e2872adbd2fec867e09e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8888c77cc-mzdlq" Dec 13 07:38:20.912833 kubelet[2265]: E1213 07:38:20.912438 2265 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9c037e468016503e64e7fabc42ffc6e14dde911867e2872adbd2fec867e09e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8888c77cc-mzdlq" Dec 13 07:38:20.913111 kubelet[2265]: E1213 07:38:20.912505 2265 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8888c77cc-mzdlq_calico-apiserver(d8ba17ac-a602-4e90-9b1d-ab55d1e3cb01)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-8888c77cc-mzdlq_calico-apiserver(d8ba17ac-a602-4e90-9b1d-ab55d1e3cb01)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d9c037e468016503e64e7fabc42ffc6e14dde911867e2872adbd2fec867e09e7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8888c77cc-mzdlq" podUID="d8ba17ac-a602-4e90-9b1d-ab55d1e3cb01" Dec 13 07:38:20.923131 env[1313]: time="2024-12-13T07:38:20.923048700Z" level=error msg="Failed to destroy network for sandbox \"0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 07:38:20.923662 env[1313]: time="2024-12-13T07:38:20.923564909Z" level=error msg="encountered an error cleaning up failed sandbox \"0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 07:38:20.923662 env[1313]: time="2024-12-13T07:38:20.923637676Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7ccc67bff4-xdz9n,Uid:1bec6ab5-1944-4370-b6de-fe55870be674,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 07:38:20.924696 kubelet[2265]: E1213 07:38:20.924099 2265 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 07:38:20.924696 kubelet[2265]: E1213 07:38:20.924164 2265 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7ccc67bff4-xdz9n" Dec 13 07:38:20.924696 kubelet[2265]: E1213 07:38:20.924238 2265 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7ccc67bff4-xdz9n" Dec 13 07:38:20.924933 kubelet[2265]: E1213 07:38:20.924306 2265 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-7ccc67bff4-xdz9n_calico-system(1bec6ab5-1944-4370-b6de-fe55870be674)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7ccc67bff4-xdz9n_calico-system(1bec6ab5-1944-4370-b6de-fe55870be674)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7ccc67bff4-xdz9n" podUID="1bec6ab5-1944-4370-b6de-fe55870be674" Dec 13 07:38:20.932770 env[1313]: time="2024-12-13T07:38:20.932704023Z" level=error msg="Failed to destroy network for sandbox \"6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 07:38:20.933215 env[1313]: time="2024-12-13T07:38:20.933156805Z" level=error msg="encountered an error cleaning up failed sandbox \"6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 07:38:20.933301 env[1313]: time="2024-12-13T07:38:20.933233845Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8888c77cc-8shn8,Uid:acbf9fc3-226f-42b6-886d-99d3e542c4e9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 07:38:20.934965 kubelet[2265]: E1213 07:38:20.933596 2265 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 07:38:20.934965 kubelet[2265]: E1213 07:38:20.933681 2265 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8888c77cc-8shn8" Dec 13 07:38:20.934965 kubelet[2265]: E1213 07:38:20.933729 2265 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8888c77cc-8shn8" Dec 13 07:38:20.935399 
kubelet[2265]: E1213 07:38:20.933818 2265 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8888c77cc-8shn8_calico-apiserver(acbf9fc3-226f-42b6-886d-99d3e542c4e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8888c77cc-8shn8_calico-apiserver(acbf9fc3-226f-42b6-886d-99d3e542c4e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8888c77cc-8shn8" podUID="acbf9fc3-226f-42b6-886d-99d3e542c4e9" Dec 13 07:38:20.937764 env[1313]: time="2024-12-13T07:38:20.937713863Z" level=error msg="Failed to destroy network for sandbox \"df65ac1466338b88150286f835f59d4d095b840d03622e56957e4d0e53a872c5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 07:38:20.938219 env[1313]: time="2024-12-13T07:38:20.938160499Z" level=error msg="encountered an error cleaning up failed sandbox \"df65ac1466338b88150286f835f59d4d095b840d03622e56957e4d0e53a872c5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 07:38:20.938304 env[1313]: time="2024-12-13T07:38:20.938232023Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-cfqbg,Uid:7ce172f1-b87c-421e-99dc-6fd967b7a919,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"df65ac1466338b88150286f835f59d4d095b840d03622e56957e4d0e53a872c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 07:38:20.939008 kubelet[2265]: E1213 07:38:20.938508 2265 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df65ac1466338b88150286f835f59d4d095b840d03622e56957e4d0e53a872c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 07:38:20.939008 kubelet[2265]: E1213 07:38:20.938573 2265 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df65ac1466338b88150286f835f59d4d095b840d03622e56957e4d0e53a872c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-cfqbg" Dec 13 07:38:20.939008 kubelet[2265]: E1213 07:38:20.938618 2265 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df65ac1466338b88150286f835f59d4d095b840d03622e56957e4d0e53a872c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-76f75df574-cfqbg" Dec 13 07:38:20.940920 kubelet[2265]: E1213 07:38:20.938703 2265 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-cfqbg_kube-system(7ce172f1-b87c-421e-99dc-6fd967b7a919)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-cfqbg_kube-system(7ce172f1-b87c-421e-99dc-6fd967b7a919)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"df65ac1466338b88150286f835f59d4d095b840d03622e56957e4d0e53a872c5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-cfqbg" podUID="7ce172f1-b87c-421e-99dc-6fd967b7a919" Dec 13 07:38:21.010588 env[1313]: time="2024-12-13T07:38:21.010496465Z" level=error msg="Failed to destroy network for sandbox \"5b88ef4d0c58e614c29a5e2b14606089b72a35d246f89f7c252604acd3eca4bb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 07:38:21.011820 env[1313]: time="2024-12-13T07:38:21.011775601Z" level=error msg="encountered an error cleaning up failed sandbox \"5b88ef4d0c58e614c29a5e2b14606089b72a35d246f89f7c252604acd3eca4bb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 07:38:21.012023 env[1313]: time="2024-12-13T07:38:21.011976447Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-msfc4,Uid:58a0fd67-8895-4d73-a1c6-6bb7d5fa543a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5b88ef4d0c58e614c29a5e2b14606089b72a35d246f89f7c252604acd3eca4bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 07:38:21.012976 kubelet[2265]: E1213 07:38:21.012567 2265 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b88ef4d0c58e614c29a5e2b14606089b72a35d246f89f7c252604acd3eca4bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 07:38:21.012976 kubelet[2265]: E1213 07:38:21.012717 2265 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b88ef4d0c58e614c29a5e2b14606089b72a35d246f89f7c252604acd3eca4bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-msfc4" Dec 13 07:38:21.012976 kubelet[2265]: E1213 07:38:21.012799 2265 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b88ef4d0c58e614c29a5e2b14606089b72a35d246f89f7c252604acd3eca4bb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-msfc4" Dec 13 07:38:21.013427 kubelet[2265]: E1213 07:38:21.012911 2265 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-msfc4_kube-system(58a0fd67-8895-4d73-a1c6-6bb7d5fa543a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-msfc4_kube-system(58a0fd67-8895-4d73-a1c6-6bb7d5fa543a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5b88ef4d0c58e614c29a5e2b14606089b72a35d246f89f7c252604acd3eca4bb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-msfc4" podUID="58a0fd67-8895-4d73-a1c6-6bb7d5fa543a" Dec 13 07:38:21.238116 kubelet[2265]: I1213 07:38:21.236753 2265 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9c037e468016503e64e7fabc42ffc6e14dde911867e2872adbd2fec867e09e7" Dec 13 07:38:21.241519 kubelet[2265]: I1213 07:38:21.241484 2265 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28" Dec 13 07:38:21.245248 env[1313]: time="2024-12-13T07:38:21.245180982Z" level=info msg="StopPodSandbox for \"0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28\"" Dec 13 07:38:21.245794 env[1313]: time="2024-12-13T07:38:21.245759990Z" level=info msg="StopPodSandbox for \"d9c037e468016503e64e7fabc42ffc6e14dde911867e2872adbd2fec867e09e7\"" Dec 13 07:38:21.252937 env[1313]: time="2024-12-13T07:38:21.252806608Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 07:38:21.259666 kubelet[2265]: I1213 07:38:21.259627 2265 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b88ef4d0c58e614c29a5e2b14606089b72a35d246f89f7c252604acd3eca4bb" Dec 13 07:38:21.260754 env[1313]: time="2024-12-13T07:38:21.260713348Z" level=info msg="StopPodSandbox for \"5b88ef4d0c58e614c29a5e2b14606089b72a35d246f89f7c252604acd3eca4bb\"" Dec 13 07:38:21.262821 kubelet[2265]: I1213 07:38:21.262474 2265 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6" Dec 13 07:38:21.264156 env[1313]: time="2024-12-13T07:38:21.264116047Z" level=info msg="StopPodSandbox for \"6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6\"" Dec 13 07:38:21.290312 kubelet[2265]: I1213 07:38:21.289155 2265 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df65ac1466338b88150286f835f59d4d095b840d03622e56957e4d0e53a872c5" Dec 13 07:38:21.293486 env[1313]: time="2024-12-13T07:38:21.293428432Z" level=info msg="StopPodSandbox for \"df65ac1466338b88150286f835f59d4d095b840d03622e56957e4d0e53a872c5\"" Dec 13 07:38:21.397926 env[1313]: time="2024-12-13T07:38:21.397829773Z" level=error msg="StopPodSandbox for \"0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28\" failed" error="failed to destroy network for sandbox \"0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 07:38:21.398903 kubelet[2265]: E1213 07:38:21.398556 2265 
remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28" Dec 13 07:38:21.398903 kubelet[2265]: E1213 07:38:21.398706 2265 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28"} Dec 13 07:38:21.398903 kubelet[2265]: E1213 07:38:21.398778 2265 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1bec6ab5-1944-4370-b6de-fe55870be674\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 07:38:21.398903 kubelet[2265]: E1213 07:38:21.398822 2265 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1bec6ab5-1944-4370-b6de-fe55870be674\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7ccc67bff4-xdz9n" podUID="1bec6ab5-1944-4370-b6de-fe55870be674" Dec 13 07:38:21.458215 env[1313]: time="2024-12-13T07:38:21.458131507Z" level=error msg="StopPodSandbox for \"d9c037e468016503e64e7fabc42ffc6e14dde911867e2872adbd2fec867e09e7\" failed" error="failed to destroy network for sandbox \"d9c037e468016503e64e7fabc42ffc6e14dde911867e2872adbd2fec867e09e7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 07:38:21.459119 kubelet[2265]: E1213 07:38:21.458838 2265 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d9c037e468016503e64e7fabc42ffc6e14dde911867e2872adbd2fec867e09e7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d9c037e468016503e64e7fabc42ffc6e14dde911867e2872adbd2fec867e09e7" Dec 13 07:38:21.459119 kubelet[2265]: E1213 07:38:21.458943 2265 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d9c037e468016503e64e7fabc42ffc6e14dde911867e2872adbd2fec867e09e7"} Dec 13 07:38:21.459119 kubelet[2265]: E1213 07:38:21.459021 2265 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d8ba17ac-a602-4e90-9b1d-ab55d1e3cb01\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d9c037e468016503e64e7fabc42ffc6e14dde911867e2872adbd2fec867e09e7\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 07:38:21.459119 kubelet[2265]: E1213 07:38:21.459082 2265 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d8ba17ac-a602-4e90-9b1d-ab55d1e3cb01\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d9c037e468016503e64e7fabc42ffc6e14dde911867e2872adbd2fec867e09e7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8888c77cc-mzdlq" podUID="d8ba17ac-a602-4e90-9b1d-ab55d1e3cb01" Dec 13 07:38:21.471312 env[1313]: time="2024-12-13T07:38:21.471217557Z" level=error msg="StopPodSandbox for \"5b88ef4d0c58e614c29a5e2b14606089b72a35d246f89f7c252604acd3eca4bb\" failed" error="failed to destroy network for sandbox \"5b88ef4d0c58e614c29a5e2b14606089b72a35d246f89f7c252604acd3eca4bb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 07:38:21.472220 kubelet[2265]: E1213 07:38:21.471933 2265 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5b88ef4d0c58e614c29a5e2b14606089b72a35d246f89f7c252604acd3eca4bb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5b88ef4d0c58e614c29a5e2b14606089b72a35d246f89f7c252604acd3eca4bb" Dec 13 07:38:21.472220 kubelet[2265]: E1213 07:38:21.472019 2265 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5b88ef4d0c58e614c29a5e2b14606089b72a35d246f89f7c252604acd3eca4bb"} Dec 13 07:38:21.472220 kubelet[2265]: E1213 07:38:21.472072 2265 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"58a0fd67-8895-4d73-a1c6-6bb7d5fa543a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5b88ef4d0c58e614c29a5e2b14606089b72a35d246f89f7c252604acd3eca4bb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 07:38:21.472220 kubelet[2265]: E1213 07:38:21.472128 2265 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"58a0fd67-8895-4d73-a1c6-6bb7d5fa543a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5b88ef4d0c58e614c29a5e2b14606089b72a35d246f89f7c252604acd3eca4bb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-msfc4" podUID="58a0fd67-8895-4d73-a1c6-6bb7d5fa543a" Dec 13 07:38:21.491973 env[1313]: time="2024-12-13T07:38:21.490714251Z" level=error msg="StopPodSandbox for \"6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6\" failed" error="failed to destroy network for sandbox 
\"6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 07:38:21.492701 kubelet[2265]: E1213 07:38:21.492476 2265 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6" Dec 13 07:38:21.492701 kubelet[2265]: E1213 07:38:21.492535 2265 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6"} Dec 13 07:38:21.492701 kubelet[2265]: E1213 07:38:21.492590 2265 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"acbf9fc3-226f-42b6-886d-99d3e542c4e9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 07:38:21.492701 kubelet[2265]: E1213 07:38:21.492644 2265 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"acbf9fc3-226f-42b6-886d-99d3e542c4e9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8888c77cc-8shn8" podUID="acbf9fc3-226f-42b6-886d-99d3e542c4e9" Dec 13 07:38:21.495073 env[1313]: time="2024-12-13T07:38:21.495026027Z" level=error msg="StopPodSandbox for \"df65ac1466338b88150286f835f59d4d095b840d03622e56957e4d0e53a872c5\" failed" error="failed to destroy network for sandbox \"df65ac1466338b88150286f835f59d4d095b840d03622e56957e4d0e53a872c5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 07:38:21.495626 kubelet[2265]: E1213 07:38:21.495422 2265 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"df65ac1466338b88150286f835f59d4d095b840d03622e56957e4d0e53a872c5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="df65ac1466338b88150286f835f59d4d095b840d03622e56957e4d0e53a872c5" Dec 13 07:38:21.495626 kubelet[2265]: E1213 07:38:21.495459 2265 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"df65ac1466338b88150286f835f59d4d095b840d03622e56957e4d0e53a872c5"} Dec 13 07:38:21.495626 kubelet[2265]: E1213 07:38:21.495523 2265 
kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7ce172f1-b87c-421e-99dc-6fd967b7a919\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"df65ac1466338b88150286f835f59d4d095b840d03622e56957e4d0e53a872c5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 07:38:21.495626 kubelet[2265]: E1213 07:38:21.495586 2265 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7ce172f1-b87c-421e-99dc-6fd967b7a919\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"df65ac1466338b88150286f835f59d4d095b840d03622e56957e4d0e53a872c5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-cfqbg" podUID="7ce172f1-b87c-421e-99dc-6fd967b7a919" Dec 13 07:38:22.046303 env[1313]: time="2024-12-13T07:38:22.046222259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jgsz2,Uid:6755c6bd-417c-469c-9e0b-b65078e35af8,Namespace:calico-system,Attempt:0,}" Dec 13 07:38:22.134630 env[1313]: time="2024-12-13T07:38:22.134540831Z" level=error msg="Failed to destroy network for sandbox \"f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 07:38:22.140089 env[1313]: time="2024-12-13T07:38:22.138396547Z" level=error msg="encountered an error cleaning up failed sandbox \"f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 07:38:22.140089 env[1313]: time="2024-12-13T07:38:22.138465084Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jgsz2,Uid:6755c6bd-417c-469c-9e0b-b65078e35af8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 07:38:22.140226 kubelet[2265]: E1213 07:38:22.138802 2265 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 07:38:22.140226 kubelet[2265]: E1213 07:38:22.138909 2265 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jgsz2" Dec 13 07:38:22.140226 kubelet[2265]: E1213 07:38:22.138943 2265 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jgsz2" Dec 13 07:38:22.137848 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e-shm.mount: Deactivated successfully. Dec 13 07:38:22.140831 kubelet[2265]: E1213 07:38:22.139022 2265 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jgsz2_calico-system(6755c6bd-417c-469c-9e0b-b65078e35af8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jgsz2_calico-system(6755c6bd-417c-469c-9e0b-b65078e35af8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jgsz2" podUID="6755c6bd-417c-469c-9e0b-b65078e35af8" Dec 13 07:38:22.293963 kubelet[2265]: I1213 07:38:22.293143 2265 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e" Dec 13 07:38:22.295126 env[1313]: time="2024-12-13T07:38:22.295080318Z" level=info msg="StopPodSandbox for \"f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e\"" Dec 13 07:38:22.334627 env[1313]: time="2024-12-13T07:38:22.334541225Z" level=error msg="StopPodSandbox for \"f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e\" failed" error="failed to destroy network for sandbox \"f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 07:38:22.335302 kubelet[2265]: E1213 07:38:22.335072 2265 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e" Dec 13 07:38:22.335302 kubelet[2265]: E1213 07:38:22.335134 2265 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e"} Dec 13 07:38:22.335302 kubelet[2265]: E1213 07:38:22.335186 2265 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6755c6bd-417c-469c-9e0b-b65078e35af8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 07:38:22.335302 kubelet[2265]: E1213 07:38:22.335237 2265 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6755c6bd-417c-469c-9e0b-b65078e35af8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jgsz2" podUID="6755c6bd-417c-469c-9e0b-b65078e35af8" Dec 13 07:38:32.048931 env[1313]: time="2024-12-13T07:38:32.047936403Z" level=info msg="StopPodSandbox for \"6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6\"" Dec 13 07:38:32.120169 env[1313]: time="2024-12-13T07:38:32.120003263Z" level=error msg="StopPodSandbox for \"6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6\" failed" error="failed to destroy network for sandbox \"6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 07:38:32.121712 kubelet[2265]: E1213 07:38:32.121337 2265 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6" Dec 13 07:38:32.121712 kubelet[2265]: E1213 07:38:32.121457 2265 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6"} Dec 13 07:38:32.121712 kubelet[2265]: E1213 07:38:32.121557 2265 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"acbf9fc3-226f-42b6-886d-99d3e542c4e9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 07:38:32.121712 kubelet[2265]: E1213 07:38:32.121640 2265 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"acbf9fc3-226f-42b6-886d-99d3e542c4e9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8888c77cc-8shn8" podUID="acbf9fc3-226f-42b6-886d-99d3e542c4e9" Dec 13 
07:38:32.435302 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4163661974.mount: Deactivated successfully. Dec 13 07:38:32.498257 env[1313]: time="2024-12-13T07:38:32.498132488Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:38:32.501158 env[1313]: time="2024-12-13T07:38:32.501102322Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:38:32.507985 env[1313]: time="2024-12-13T07:38:32.507931088Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:38:32.508954 env[1313]: time="2024-12-13T07:38:32.508921125Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:38:32.510921 env[1313]: time="2024-12-13T07:38:32.510862648Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Dec 13 07:38:32.550471 env[1313]: time="2024-12-13T07:38:32.550407184Z" level=info msg="CreateContainer within sandbox \"78c0cac706257aca8ecb2645c99934fa45125e4c977a173d7d55c8797d00c334\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 07:38:32.572214 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3387754478.mount: Deactivated successfully. Dec 13 07:38:32.576436 env[1313]: time="2024-12-13T07:38:32.576376770Z" level=info msg="CreateContainer within sandbox \"78c0cac706257aca8ecb2645c99934fa45125e4c977a173d7d55c8797d00c334\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f686aa7caff47cf7895fa8aec1a3132bcab42b03efa7a9dbef0ec320ee2de6cd\"" Dec 13 07:38:32.579938 env[1313]: time="2024-12-13T07:38:32.579876792Z" level=info msg="StartContainer for \"f686aa7caff47cf7895fa8aec1a3132bcab42b03efa7a9dbef0ec320ee2de6cd\"" Dec 13 07:38:32.700234 env[1313]: time="2024-12-13T07:38:32.700006642Z" level=info msg="StartContainer for \"f686aa7caff47cf7895fa8aec1a3132bcab42b03efa7a9dbef0ec320ee2de6cd\" returns successfully" Dec 13 07:38:33.044015 env[1313]: time="2024-12-13T07:38:33.043493341Z" level=info msg="StopPodSandbox for \"0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28\"" Dec 13 07:38:33.048420 env[1313]: time="2024-12-13T07:38:33.045713029Z" level=info msg="StopPodSandbox for \"f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e\"" Dec 13 07:38:33.182816 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 07:38:33.183203 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
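The sandbox failures above all trace back to one condition: the Calico CNI plugin cannot find /var/lib/calico/nodename, a file the calico/node container writes after it starts, so every sandbox add and delete fails until the ghcr.io/flatcar/calico/node image finishes pulling and the calico-node container comes up (the PullImage and StartContainer entries just above). The following is only an illustrative Go sketch of that kind of readiness check, using a hypothetical readNodename helper rather than Calico's actual source, to show why the plugin keeps returning the same hint:

    package main

    import (
        "fmt"
        "os"
    )

    // Path written by calico/node once it is running and has mounted /var/lib/calico/.
    const nodenameFile = "/var/lib/calico/nodename"

    // readNodename mirrors the check implied by the error text in the log: if the file
    // is missing, surface the same guidance about the calico/node container and its mount.
    func readNodename() (string, error) {
        data, err := os.ReadFile(nodenameFile)
        if os.IsNotExist(err) {
            return "", fmt.Errorf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile)
        }
        if err != nil {
            return "", err
        }
        return string(data), nil
    }

    func main() {
        name, err := readNodename()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("node:", name)
    }

Consistent with this, the StopPodSandbox retries at 07:38:33 below still fail with the same missing-nodename error, but by 07:38:34 the teardown path completes and Calico IPAM assigns an address, as the entries that follow show.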
Dec 13 07:38:33.188145 env[1313]: time="2024-12-13T07:38:33.188019576Z" level=error msg="StopPodSandbox for \"0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28\" failed" error="failed to destroy network for sandbox \"0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 07:38:33.189363 kubelet[2265]: E1213 07:38:33.188643 2265 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28" Dec 13 07:38:33.189363 kubelet[2265]: E1213 07:38:33.188811 2265 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28"} Dec 13 07:38:33.189363 kubelet[2265]: E1213 07:38:33.189079 2265 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1bec6ab5-1944-4370-b6de-fe55870be674\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 07:38:33.189363 kubelet[2265]: E1213 07:38:33.189236 2265 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1bec6ab5-1944-4370-b6de-fe55870be674\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7ccc67bff4-xdz9n" podUID="1bec6ab5-1944-4370-b6de-fe55870be674" Dec 13 07:38:33.190753 env[1313]: time="2024-12-13T07:38:33.190673953Z" level=error msg="StopPodSandbox for \"f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e\" failed" error="failed to destroy network for sandbox \"f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 07:38:33.191222 kubelet[2265]: E1213 07:38:33.191160 2265 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e" Dec 13 07:38:33.191222 kubelet[2265]: E1213 07:38:33.191206 2265 
kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e"} Dec 13 07:38:33.192409 kubelet[2265]: E1213 07:38:33.191257 2265 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6755c6bd-417c-469c-9e0b-b65078e35af8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 07:38:33.192409 kubelet[2265]: E1213 07:38:33.191305 2265 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6755c6bd-417c-469c-9e0b-b65078e35af8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jgsz2" podUID="6755c6bd-417c-469c-9e0b-b65078e35af8" Dec 13 07:38:34.044071 env[1313]: time="2024-12-13T07:38:34.042294758Z" level=info msg="StopPodSandbox for \"d9c037e468016503e64e7fabc42ffc6e14dde911867e2872adbd2fec867e09e7\"" Dec 13 07:38:34.044071 env[1313]: time="2024-12-13T07:38:34.042514623Z" level=info msg="StopPodSandbox for \"5b88ef4d0c58e614c29a5e2b14606089b72a35d246f89f7c252604acd3eca4bb\"" Dec 13 07:38:34.162317 kubelet[2265]: I1213 07:38:34.162240 2265 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-zcx7b" podStartSLOduration=2.454003048 podStartE2EDuration="28.148147164s" podCreationTimestamp="2024-12-13 07:38:06 +0000 UTC" firstStartedPulling="2024-12-13 07:38:06.817311712 +0000 UTC m=+22.094606038" lastFinishedPulling="2024-12-13 07:38:32.511455822 +0000 UTC m=+47.788750154" observedRunningTime="2024-12-13 07:38:33.381852667 +0000 UTC m=+48.659147016" watchObservedRunningTime="2024-12-13 07:38:34.148147164 +0000 UTC m=+49.425441512" Dec 13 07:38:34.355389 env[1313]: 2024-12-13 07:38:34.149 [INFO][3506] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d9c037e468016503e64e7fabc42ffc6e14dde911867e2872adbd2fec867e09e7" Dec 13 07:38:34.355389 env[1313]: 2024-12-13 07:38:34.149 [INFO][3506] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d9c037e468016503e64e7fabc42ffc6e14dde911867e2872adbd2fec867e09e7" iface="eth0" netns="/var/run/netns/cni-c6958b85-3268-8775-5db0-cbea2818f590" Dec 13 07:38:34.355389 env[1313]: 2024-12-13 07:38:34.151 [INFO][3506] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d9c037e468016503e64e7fabc42ffc6e14dde911867e2872adbd2fec867e09e7" iface="eth0" netns="/var/run/netns/cni-c6958b85-3268-8775-5db0-cbea2818f590" Dec 13 07:38:34.355389 env[1313]: 2024-12-13 07:38:34.161 [INFO][3506] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d9c037e468016503e64e7fabc42ffc6e14dde911867e2872adbd2fec867e09e7" iface="eth0" netns="/var/run/netns/cni-c6958b85-3268-8775-5db0-cbea2818f590" Dec 13 07:38:34.355389 env[1313]: 2024-12-13 07:38:34.163 [INFO][3506] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d9c037e468016503e64e7fabc42ffc6e14dde911867e2872adbd2fec867e09e7" Dec 13 07:38:34.355389 env[1313]: 2024-12-13 07:38:34.164 [INFO][3506] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d9c037e468016503e64e7fabc42ffc6e14dde911867e2872adbd2fec867e09e7" Dec 13 07:38:34.355389 env[1313]: 2024-12-13 07:38:34.335 [INFO][3516] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d9c037e468016503e64e7fabc42ffc6e14dde911867e2872adbd2fec867e09e7" HandleID="k8s-pod-network.d9c037e468016503e64e7fabc42ffc6e14dde911867e2872adbd2fec867e09e7" Workload="srv--ktue8.gb1.brightbox.com-k8s-calico--apiserver--8888c77cc--mzdlq-eth0" Dec 13 07:38:34.355389 env[1313]: 2024-12-13 07:38:34.336 [INFO][3516] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 07:38:34.355389 env[1313]: 2024-12-13 07:38:34.336 [INFO][3516] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 07:38:34.355389 env[1313]: 2024-12-13 07:38:34.349 [WARNING][3516] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d9c037e468016503e64e7fabc42ffc6e14dde911867e2872adbd2fec867e09e7" HandleID="k8s-pod-network.d9c037e468016503e64e7fabc42ffc6e14dde911867e2872adbd2fec867e09e7" Workload="srv--ktue8.gb1.brightbox.com-k8s-calico--apiserver--8888c77cc--mzdlq-eth0" Dec 13 07:38:34.355389 env[1313]: 2024-12-13 07:38:34.349 [INFO][3516] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d9c037e468016503e64e7fabc42ffc6e14dde911867e2872adbd2fec867e09e7" HandleID="k8s-pod-network.d9c037e468016503e64e7fabc42ffc6e14dde911867e2872adbd2fec867e09e7" Workload="srv--ktue8.gb1.brightbox.com-k8s-calico--apiserver--8888c77cc--mzdlq-eth0" Dec 13 07:38:34.355389 env[1313]: 2024-12-13 07:38:34.350 [INFO][3516] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 07:38:34.355389 env[1313]: 2024-12-13 07:38:34.353 [INFO][3506] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d9c037e468016503e64e7fabc42ffc6e14dde911867e2872adbd2fec867e09e7" Dec 13 07:38:34.364944 env[1313]: time="2024-12-13T07:38:34.362453609Z" level=info msg="TearDown network for sandbox \"d9c037e468016503e64e7fabc42ffc6e14dde911867e2872adbd2fec867e09e7\" successfully" Dec 13 07:38:34.364944 env[1313]: time="2024-12-13T07:38:34.362512879Z" level=info msg="StopPodSandbox for \"d9c037e468016503e64e7fabc42ffc6e14dde911867e2872adbd2fec867e09e7\" returns successfully" Dec 13 07:38:34.361059 systemd[1]: run-netns-cni\x2dc6958b85\x2d3268\x2d8775\x2d5db0\x2dcbea2818f590.mount: Deactivated successfully. Dec 13 07:38:34.366714 env[1313]: time="2024-12-13T07:38:34.366661149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8888c77cc-mzdlq,Uid:d8ba17ac-a602-4e90-9b1d-ab55d1e3cb01,Namespace:calico-apiserver,Attempt:1,}" Dec 13 07:38:34.377279 env[1313]: 2024-12-13 07:38:34.149 [INFO][3498] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5b88ef4d0c58e614c29a5e2b14606089b72a35d246f89f7c252604acd3eca4bb" Dec 13 07:38:34.377279 env[1313]: 2024-12-13 07:38:34.149 [INFO][3498] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="5b88ef4d0c58e614c29a5e2b14606089b72a35d246f89f7c252604acd3eca4bb" iface="eth0" netns="/var/run/netns/cni-39ed5872-a2a7-0d83-4e8b-ca107138c1f1" Dec 13 07:38:34.377279 env[1313]: 2024-12-13 07:38:34.150 [INFO][3498] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5b88ef4d0c58e614c29a5e2b14606089b72a35d246f89f7c252604acd3eca4bb" iface="eth0" netns="/var/run/netns/cni-39ed5872-a2a7-0d83-4e8b-ca107138c1f1" Dec 13 07:38:34.377279 env[1313]: 2024-12-13 07:38:34.161 [INFO][3498] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5b88ef4d0c58e614c29a5e2b14606089b72a35d246f89f7c252604acd3eca4bb" iface="eth0" netns="/var/run/netns/cni-39ed5872-a2a7-0d83-4e8b-ca107138c1f1" Dec 13 07:38:34.377279 env[1313]: 2024-12-13 07:38:34.161 [INFO][3498] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5b88ef4d0c58e614c29a5e2b14606089b72a35d246f89f7c252604acd3eca4bb" Dec 13 07:38:34.377279 env[1313]: 2024-12-13 07:38:34.161 [INFO][3498] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5b88ef4d0c58e614c29a5e2b14606089b72a35d246f89f7c252604acd3eca4bb" Dec 13 07:38:34.377279 env[1313]: 2024-12-13 07:38:34.334 [INFO][3515] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5b88ef4d0c58e614c29a5e2b14606089b72a35d246f89f7c252604acd3eca4bb" HandleID="k8s-pod-network.5b88ef4d0c58e614c29a5e2b14606089b72a35d246f89f7c252604acd3eca4bb" Workload="srv--ktue8.gb1.brightbox.com-k8s-coredns--76f75df574--msfc4-eth0" Dec 13 07:38:34.377279 env[1313]: 2024-12-13 07:38:34.336 [INFO][3515] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 07:38:34.377279 env[1313]: 2024-12-13 07:38:34.350 [INFO][3515] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 07:38:34.377279 env[1313]: 2024-12-13 07:38:34.365 [WARNING][3515] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5b88ef4d0c58e614c29a5e2b14606089b72a35d246f89f7c252604acd3eca4bb" HandleID="k8s-pod-network.5b88ef4d0c58e614c29a5e2b14606089b72a35d246f89f7c252604acd3eca4bb" Workload="srv--ktue8.gb1.brightbox.com-k8s-coredns--76f75df574--msfc4-eth0" Dec 13 07:38:34.377279 env[1313]: 2024-12-13 07:38:34.366 [INFO][3515] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5b88ef4d0c58e614c29a5e2b14606089b72a35d246f89f7c252604acd3eca4bb" HandleID="k8s-pod-network.5b88ef4d0c58e614c29a5e2b14606089b72a35d246f89f7c252604acd3eca4bb" Workload="srv--ktue8.gb1.brightbox.com-k8s-coredns--76f75df574--msfc4-eth0" Dec 13 07:38:34.377279 env[1313]: 2024-12-13 07:38:34.369 [INFO][3515] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 07:38:34.377279 env[1313]: 2024-12-13 07:38:34.374 [INFO][3498] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="5b88ef4d0c58e614c29a5e2b14606089b72a35d246f89f7c252604acd3eca4bb" Dec 13 07:38:34.384579 env[1313]: time="2024-12-13T07:38:34.382114261Z" level=info msg="TearDown network for sandbox \"5b88ef4d0c58e614c29a5e2b14606089b72a35d246f89f7c252604acd3eca4bb\" successfully" Dec 13 07:38:34.384579 env[1313]: time="2024-12-13T07:38:34.382168426Z" level=info msg="StopPodSandbox for \"5b88ef4d0c58e614c29a5e2b14606089b72a35d246f89f7c252604acd3eca4bb\" returns successfully" Dec 13 07:38:34.384579 env[1313]: time="2024-12-13T07:38:34.384256466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-msfc4,Uid:58a0fd67-8895-4d73-a1c6-6bb7d5fa543a,Namespace:kube-system,Attempt:1,}" Dec 13 07:38:34.380853 systemd[1]: run-netns-cni\x2d39ed5872\x2da2a7\x2d0d83\x2d4e8b\x2dca107138c1f1.mount: Deactivated successfully. Dec 13 07:38:34.708977 systemd-networkd[1082]: calicb89ca73ece: Link UP Dec 13 07:38:34.725001 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 07:38:34.725219 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calicb89ca73ece: link becomes ready Dec 13 07:38:34.725693 systemd-networkd[1082]: calicb89ca73ece: Gained carrier Dec 13 07:38:34.764590 env[1313]: 2024-12-13 07:38:34.477 [INFO][3535] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 07:38:34.764590 env[1313]: 2024-12-13 07:38:34.516 [INFO][3535] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--ktue8.gb1.brightbox.com-k8s-calico--apiserver--8888c77cc--mzdlq-eth0 calico-apiserver-8888c77cc- calico-apiserver d8ba17ac-a602-4e90-9b1d-ab55d1e3cb01 800 0 2024-12-13 07:38:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8888c77cc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-ktue8.gb1.brightbox.com calico-apiserver-8888c77cc-mzdlq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calicb89ca73ece [] []}} ContainerID="c43351c71239975c0ccf7fd8c810399aeb1b96a60904f13b3d26d77e8b0added" Namespace="calico-apiserver" Pod="calico-apiserver-8888c77cc-mzdlq" WorkloadEndpoint="srv--ktue8.gb1.brightbox.com-k8s-calico--apiserver--8888c77cc--mzdlq-" Dec 13 07:38:34.764590 env[1313]: 2024-12-13 07:38:34.516 [INFO][3535] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c43351c71239975c0ccf7fd8c810399aeb1b96a60904f13b3d26d77e8b0added" Namespace="calico-apiserver" Pod="calico-apiserver-8888c77cc-mzdlq" WorkloadEndpoint="srv--ktue8.gb1.brightbox.com-k8s-calico--apiserver--8888c77cc--mzdlq-eth0" Dec 13 07:38:34.764590 env[1313]: 2024-12-13 07:38:34.585 [INFO][3573] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c43351c71239975c0ccf7fd8c810399aeb1b96a60904f13b3d26d77e8b0added" HandleID="k8s-pod-network.c43351c71239975c0ccf7fd8c810399aeb1b96a60904f13b3d26d77e8b0added" Workload="srv--ktue8.gb1.brightbox.com-k8s-calico--apiserver--8888c77cc--mzdlq-eth0" Dec 13 07:38:34.764590 env[1313]: 2024-12-13 07:38:34.601 [INFO][3573] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c43351c71239975c0ccf7fd8c810399aeb1b96a60904f13b3d26d77e8b0added" HandleID="k8s-pod-network.c43351c71239975c0ccf7fd8c810399aeb1b96a60904f13b3d26d77e8b0added" Workload="srv--ktue8.gb1.brightbox.com-k8s-calico--apiserver--8888c77cc--mzdlq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc000334d40), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-ktue8.gb1.brightbox.com", "pod":"calico-apiserver-8888c77cc-mzdlq", "timestamp":"2024-12-13 07:38:34.585079867 +0000 UTC"}, Hostname:"srv-ktue8.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 07:38:34.764590 env[1313]: 2024-12-13 07:38:34.601 [INFO][3573] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 07:38:34.764590 env[1313]: 2024-12-13 07:38:34.602 [INFO][3573] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 07:38:34.764590 env[1313]: 2024-12-13 07:38:34.602 [INFO][3573] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-ktue8.gb1.brightbox.com' Dec 13 07:38:34.764590 env[1313]: 2024-12-13 07:38:34.606 [INFO][3573] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c43351c71239975c0ccf7fd8c810399aeb1b96a60904f13b3d26d77e8b0added" host="srv-ktue8.gb1.brightbox.com" Dec 13 07:38:34.764590 env[1313]: 2024-12-13 07:38:34.619 [INFO][3573] ipam/ipam.go 372: Looking up existing affinities for host host="srv-ktue8.gb1.brightbox.com" Dec 13 07:38:34.764590 env[1313]: 2024-12-13 07:38:34.625 [INFO][3573] ipam/ipam.go 489: Trying affinity for 192.168.109.0/26 host="srv-ktue8.gb1.brightbox.com" Dec 13 07:38:34.764590 env[1313]: 2024-12-13 07:38:34.630 [INFO][3573] ipam/ipam.go 155: Attempting to load block cidr=192.168.109.0/26 host="srv-ktue8.gb1.brightbox.com" Dec 13 07:38:34.764590 env[1313]: 2024-12-13 07:38:34.634 [INFO][3573] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.109.0/26 host="srv-ktue8.gb1.brightbox.com" Dec 13 07:38:34.764590 env[1313]: 2024-12-13 07:38:34.634 [INFO][3573] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.109.0/26 handle="k8s-pod-network.c43351c71239975c0ccf7fd8c810399aeb1b96a60904f13b3d26d77e8b0added" host="srv-ktue8.gb1.brightbox.com" Dec 13 07:38:34.764590 env[1313]: 2024-12-13 07:38:34.636 [INFO][3573] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c43351c71239975c0ccf7fd8c810399aeb1b96a60904f13b3d26d77e8b0added Dec 13 07:38:34.764590 env[1313]: 2024-12-13 07:38:34.642 [INFO][3573] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.109.0/26 handle="k8s-pod-network.c43351c71239975c0ccf7fd8c810399aeb1b96a60904f13b3d26d77e8b0added" host="srv-ktue8.gb1.brightbox.com" Dec 13 07:38:34.764590 env[1313]: 2024-12-13 07:38:34.656 [INFO][3573] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.109.1/26] block=192.168.109.0/26 handle="k8s-pod-network.c43351c71239975c0ccf7fd8c810399aeb1b96a60904f13b3d26d77e8b0added" host="srv-ktue8.gb1.brightbox.com" Dec 13 07:38:34.764590 env[1313]: 2024-12-13 07:38:34.656 [INFO][3573] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.109.1/26] handle="k8s-pod-network.c43351c71239975c0ccf7fd8c810399aeb1b96a60904f13b3d26d77e8b0added" host="srv-ktue8.gb1.brightbox.com" Dec 13 07:38:34.764590 env[1313]: 2024-12-13 07:38:34.656 [INFO][3573] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 07:38:34.764590 env[1313]: 2024-12-13 07:38:34.656 [INFO][3573] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.109.1/26] IPv6=[] ContainerID="c43351c71239975c0ccf7fd8c810399aeb1b96a60904f13b3d26d77e8b0added" HandleID="k8s-pod-network.c43351c71239975c0ccf7fd8c810399aeb1b96a60904f13b3d26d77e8b0added" Workload="srv--ktue8.gb1.brightbox.com-k8s-calico--apiserver--8888c77cc--mzdlq-eth0" Dec 13 07:38:34.768659 env[1313]: 2024-12-13 07:38:34.667 [INFO][3535] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c43351c71239975c0ccf7fd8c810399aeb1b96a60904f13b3d26d77e8b0added" Namespace="calico-apiserver" Pod="calico-apiserver-8888c77cc-mzdlq" WorkloadEndpoint="srv--ktue8.gb1.brightbox.com-k8s-calico--apiserver--8888c77cc--mzdlq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ktue8.gb1.brightbox.com-k8s-calico--apiserver--8888c77cc--mzdlq-eth0", GenerateName:"calico-apiserver-8888c77cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"d8ba17ac-a602-4e90-9b1d-ab55d1e3cb01", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 7, 38, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8888c77cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ktue8.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-8888c77cc-mzdlq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicb89ca73ece", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 07:38:34.768659 env[1313]: 2024-12-13 07:38:34.667 [INFO][3535] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.109.1/32] ContainerID="c43351c71239975c0ccf7fd8c810399aeb1b96a60904f13b3d26d77e8b0added" Namespace="calico-apiserver" Pod="calico-apiserver-8888c77cc-mzdlq" WorkloadEndpoint="srv--ktue8.gb1.brightbox.com-k8s-calico--apiserver--8888c77cc--mzdlq-eth0" Dec 13 07:38:34.768659 env[1313]: 2024-12-13 07:38:34.667 [INFO][3535] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicb89ca73ece ContainerID="c43351c71239975c0ccf7fd8c810399aeb1b96a60904f13b3d26d77e8b0added" Namespace="calico-apiserver" Pod="calico-apiserver-8888c77cc-mzdlq" WorkloadEndpoint="srv--ktue8.gb1.brightbox.com-k8s-calico--apiserver--8888c77cc--mzdlq-eth0" Dec 13 07:38:34.768659 env[1313]: 2024-12-13 07:38:34.726 [INFO][3535] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c43351c71239975c0ccf7fd8c810399aeb1b96a60904f13b3d26d77e8b0added" Namespace="calico-apiserver" Pod="calico-apiserver-8888c77cc-mzdlq" WorkloadEndpoint="srv--ktue8.gb1.brightbox.com-k8s-calico--apiserver--8888c77cc--mzdlq-eth0" Dec 13 07:38:34.768659 env[1313]: 2024-12-13 07:38:34.728 [INFO][3535] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="c43351c71239975c0ccf7fd8c810399aeb1b96a60904f13b3d26d77e8b0added" Namespace="calico-apiserver" Pod="calico-apiserver-8888c77cc-mzdlq" WorkloadEndpoint="srv--ktue8.gb1.brightbox.com-k8s-calico--apiserver--8888c77cc--mzdlq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ktue8.gb1.brightbox.com-k8s-calico--apiserver--8888c77cc--mzdlq-eth0", GenerateName:"calico-apiserver-8888c77cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"d8ba17ac-a602-4e90-9b1d-ab55d1e3cb01", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 7, 38, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8888c77cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ktue8.gb1.brightbox.com", ContainerID:"c43351c71239975c0ccf7fd8c810399aeb1b96a60904f13b3d26d77e8b0added", Pod:"calico-apiserver-8888c77cc-mzdlq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicb89ca73ece", MAC:"ba:27:1f:cd:84:6f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 07:38:34.768659 env[1313]: 2024-12-13 07:38:34.752 [INFO][3535] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c43351c71239975c0ccf7fd8c810399aeb1b96a60904f13b3d26d77e8b0added" Namespace="calico-apiserver" Pod="calico-apiserver-8888c77cc-mzdlq" WorkloadEndpoint="srv--ktue8.gb1.brightbox.com-k8s-calico--apiserver--8888c77cc--mzdlq-eth0" Dec 13 07:38:34.798005 systemd-networkd[1082]: calia6bb70ac1af: Link UP Dec 13 07:38:34.804413 systemd-networkd[1082]: calia6bb70ac1af: Gained carrier Dec 13 07:38:34.804988 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calia6bb70ac1af: link becomes ready Dec 13 07:38:34.842364 env[1313]: 2024-12-13 07:38:34.523 [INFO][3547] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 07:38:34.842364 env[1313]: 2024-12-13 07:38:34.548 [INFO][3547] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--ktue8.gb1.brightbox.com-k8s-coredns--76f75df574--msfc4-eth0 coredns-76f75df574- kube-system 58a0fd67-8895-4d73-a1c6-6bb7d5fa543a 801 0 2024-12-13 07:37:57 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-ktue8.gb1.brightbox.com coredns-76f75df574-msfc4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia6bb70ac1af [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="1bb6a5835ca29f4649c9fdf0be51d8bc802811478bcfd9119f105c2646b7abc3" Namespace="kube-system" Pod="coredns-76f75df574-msfc4" WorkloadEndpoint="srv--ktue8.gb1.brightbox.com-k8s-coredns--76f75df574--msfc4-" Dec 13 07:38:34.842364 env[1313]: 
2024-12-13 07:38:34.548 [INFO][3547] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1bb6a5835ca29f4649c9fdf0be51d8bc802811478bcfd9119f105c2646b7abc3" Namespace="kube-system" Pod="coredns-76f75df574-msfc4" WorkloadEndpoint="srv--ktue8.gb1.brightbox.com-k8s-coredns--76f75df574--msfc4-eth0" Dec 13 07:38:34.842364 env[1313]: 2024-12-13 07:38:34.639 [INFO][3577] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1bb6a5835ca29f4649c9fdf0be51d8bc802811478bcfd9119f105c2646b7abc3" HandleID="k8s-pod-network.1bb6a5835ca29f4649c9fdf0be51d8bc802811478bcfd9119f105c2646b7abc3" Workload="srv--ktue8.gb1.brightbox.com-k8s-coredns--76f75df574--msfc4-eth0" Dec 13 07:38:34.842364 env[1313]: 2024-12-13 07:38:34.668 [INFO][3577] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1bb6a5835ca29f4649c9fdf0be51d8bc802811478bcfd9119f105c2646b7abc3" HandleID="k8s-pod-network.1bb6a5835ca29f4649c9fdf0be51d8bc802811478bcfd9119f105c2646b7abc3" Workload="srv--ktue8.gb1.brightbox.com-k8s-coredns--76f75df574--msfc4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290170), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-ktue8.gb1.brightbox.com", "pod":"coredns-76f75df574-msfc4", "timestamp":"2024-12-13 07:38:34.63948202 +0000 UTC"}, Hostname:"srv-ktue8.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 07:38:34.842364 env[1313]: 2024-12-13 07:38:34.668 [INFO][3577] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 07:38:34.842364 env[1313]: 2024-12-13 07:38:34.668 [INFO][3577] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 07:38:34.842364 env[1313]: 2024-12-13 07:38:34.668 [INFO][3577] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-ktue8.gb1.brightbox.com' Dec 13 07:38:34.842364 env[1313]: 2024-12-13 07:38:34.671 [INFO][3577] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1bb6a5835ca29f4649c9fdf0be51d8bc802811478bcfd9119f105c2646b7abc3" host="srv-ktue8.gb1.brightbox.com" Dec 13 07:38:34.842364 env[1313]: 2024-12-13 07:38:34.678 [INFO][3577] ipam/ipam.go 372: Looking up existing affinities for host host="srv-ktue8.gb1.brightbox.com" Dec 13 07:38:34.842364 env[1313]: 2024-12-13 07:38:34.685 [INFO][3577] ipam/ipam.go 489: Trying affinity for 192.168.109.0/26 host="srv-ktue8.gb1.brightbox.com" Dec 13 07:38:34.842364 env[1313]: 2024-12-13 07:38:34.687 [INFO][3577] ipam/ipam.go 155: Attempting to load block cidr=192.168.109.0/26 host="srv-ktue8.gb1.brightbox.com" Dec 13 07:38:34.842364 env[1313]: 2024-12-13 07:38:34.704 [INFO][3577] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.109.0/26 host="srv-ktue8.gb1.brightbox.com" Dec 13 07:38:34.842364 env[1313]: 2024-12-13 07:38:34.704 [INFO][3577] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.109.0/26 handle="k8s-pod-network.1bb6a5835ca29f4649c9fdf0be51d8bc802811478bcfd9119f105c2646b7abc3" host="srv-ktue8.gb1.brightbox.com" Dec 13 07:38:34.842364 env[1313]: 2024-12-13 07:38:34.707 [INFO][3577] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1bb6a5835ca29f4649c9fdf0be51d8bc802811478bcfd9119f105c2646b7abc3 Dec 13 07:38:34.842364 env[1313]: 2024-12-13 07:38:34.728 [INFO][3577] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.109.0/26 handle="k8s-pod-network.1bb6a5835ca29f4649c9fdf0be51d8bc802811478bcfd9119f105c2646b7abc3" host="srv-ktue8.gb1.brightbox.com" Dec 13 07:38:34.842364 env[1313]: 2024-12-13 07:38:34.774 [INFO][3577] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.109.2/26] block=192.168.109.0/26 handle="k8s-pod-network.1bb6a5835ca29f4649c9fdf0be51d8bc802811478bcfd9119f105c2646b7abc3" host="srv-ktue8.gb1.brightbox.com" Dec 13 07:38:34.842364 env[1313]: 2024-12-13 07:38:34.774 [INFO][3577] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.109.2/26] handle="k8s-pod-network.1bb6a5835ca29f4649c9fdf0be51d8bc802811478bcfd9119f105c2646b7abc3" host="srv-ktue8.gb1.brightbox.com" Dec 13 07:38:34.842364 env[1313]: 2024-12-13 07:38:34.774 [INFO][3577] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
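The host-side interface names that systemd-networkd brings up here (calicb89ca73ece, calia6bb70ac1af, and cali594ae20e984 further down) are all a "cali" prefix plus eleven hex characters, sized to fit Linux's 15-character interface-name limit. Below is a hypothetical derivation for illustration only: the log does not show how the plugin actually computes the suffix, so the SHA-1-of-endpoint-ID scheme here is an assumption.

```go
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// hostVethName derives a deterministic, interface-name-sized identifier for a
// workload, in the spirit of the "cali..." names in the log. This is a
// hypothetical scheme: it hashes an endpoint ID and keeps just enough hex
// characters to stay within the 15-character IFNAMSIZ limit; the exact
// derivation Calico uses is not shown in the log.
func hostVethName(prefix, endpointID string) string {
	sum := sha1.Sum([]byte(endpointID))
	return prefix + hex.EncodeToString(sum[:])[:15-len(prefix)]
}

func main() {
	// Hypothetical endpoint identifier; only its stability matters here.
	id := "calico-apiserver/calico-apiserver-8888c77cc-mzdlq/eth0"
	fmt.Println(hostVethName("cali", id)) // "cali" + 11 hex chars, 15 chars total
}
```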
Dec 13 07:38:34.842364 env[1313]: 2024-12-13 07:38:34.774 [INFO][3577] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.109.2/26] IPv6=[] ContainerID="1bb6a5835ca29f4649c9fdf0be51d8bc802811478bcfd9119f105c2646b7abc3" HandleID="k8s-pod-network.1bb6a5835ca29f4649c9fdf0be51d8bc802811478bcfd9119f105c2646b7abc3" Workload="srv--ktue8.gb1.brightbox.com-k8s-coredns--76f75df574--msfc4-eth0" Dec 13 07:38:34.843942 env[1313]: 2024-12-13 07:38:34.780 [INFO][3547] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1bb6a5835ca29f4649c9fdf0be51d8bc802811478bcfd9119f105c2646b7abc3" Namespace="kube-system" Pod="coredns-76f75df574-msfc4" WorkloadEndpoint="srv--ktue8.gb1.brightbox.com-k8s-coredns--76f75df574--msfc4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ktue8.gb1.brightbox.com-k8s-coredns--76f75df574--msfc4-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"58a0fd67-8895-4d73-a1c6-6bb7d5fa543a", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 7, 37, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ktue8.gb1.brightbox.com", ContainerID:"", Pod:"coredns-76f75df574-msfc4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia6bb70ac1af", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 07:38:34.843942 env[1313]: 2024-12-13 07:38:34.780 [INFO][3547] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.109.2/32] ContainerID="1bb6a5835ca29f4649c9fdf0be51d8bc802811478bcfd9119f105c2646b7abc3" Namespace="kube-system" Pod="coredns-76f75df574-msfc4" WorkloadEndpoint="srv--ktue8.gb1.brightbox.com-k8s-coredns--76f75df574--msfc4-eth0" Dec 13 07:38:34.843942 env[1313]: 2024-12-13 07:38:34.780 [INFO][3547] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia6bb70ac1af ContainerID="1bb6a5835ca29f4649c9fdf0be51d8bc802811478bcfd9119f105c2646b7abc3" Namespace="kube-system" Pod="coredns-76f75df574-msfc4" WorkloadEndpoint="srv--ktue8.gb1.brightbox.com-k8s-coredns--76f75df574--msfc4-eth0" Dec 13 07:38:34.843942 env[1313]: 2024-12-13 07:38:34.805 [INFO][3547] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1bb6a5835ca29f4649c9fdf0be51d8bc802811478bcfd9119f105c2646b7abc3" Namespace="kube-system" Pod="coredns-76f75df574-msfc4" 
WorkloadEndpoint="srv--ktue8.gb1.brightbox.com-k8s-coredns--76f75df574--msfc4-eth0" Dec 13 07:38:34.843942 env[1313]: 2024-12-13 07:38:34.811 [INFO][3547] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1bb6a5835ca29f4649c9fdf0be51d8bc802811478bcfd9119f105c2646b7abc3" Namespace="kube-system" Pod="coredns-76f75df574-msfc4" WorkloadEndpoint="srv--ktue8.gb1.brightbox.com-k8s-coredns--76f75df574--msfc4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ktue8.gb1.brightbox.com-k8s-coredns--76f75df574--msfc4-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"58a0fd67-8895-4d73-a1c6-6bb7d5fa543a", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 7, 37, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ktue8.gb1.brightbox.com", ContainerID:"1bb6a5835ca29f4649c9fdf0be51d8bc802811478bcfd9119f105c2646b7abc3", Pod:"coredns-76f75df574-msfc4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia6bb70ac1af", MAC:"4e:5e:a3:08:d0:34", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 07:38:34.843942 env[1313]: 2024-12-13 07:38:34.829 [INFO][3547] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1bb6a5835ca29f4649c9fdf0be51d8bc802811478bcfd9119f105c2646b7abc3" Namespace="kube-system" Pod="coredns-76f75df574-msfc4" WorkloadEndpoint="srv--ktue8.gb1.brightbox.com-k8s-coredns--76f75df574--msfc4-eth0" Dec 13 07:38:34.856960 env[1313]: time="2024-12-13T07:38:34.856813206Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 07:38:34.857301 env[1313]: time="2024-12-13T07:38:34.856975654Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 07:38:34.857301 env[1313]: time="2024-12-13T07:38:34.857029731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 07:38:34.872088 env[1313]: time="2024-12-13T07:38:34.857561318Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c43351c71239975c0ccf7fd8c810399aeb1b96a60904f13b3d26d77e8b0added pid=3642 runtime=io.containerd.runc.v2 Dec 13 07:38:34.948300 systemd[1]: run-containerd-runc-k8s.io-c43351c71239975c0ccf7fd8c810399aeb1b96a60904f13b3d26d77e8b0added-runc.tojVyB.mount: Deactivated successfully. Dec 13 07:38:34.960364 env[1313]: time="2024-12-13T07:38:34.960189284Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 07:38:34.960573 env[1313]: time="2024-12-13T07:38:34.960409161Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 07:38:34.960573 env[1313]: time="2024-12-13T07:38:34.960481286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 07:38:34.964099 env[1313]: time="2024-12-13T07:38:34.960948055Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1bb6a5835ca29f4649c9fdf0be51d8bc802811478bcfd9119f105c2646b7abc3 pid=3689 runtime=io.containerd.runc.v2 Dec 13 07:38:35.056618 env[1313]: time="2024-12-13T07:38:35.055317753Z" level=info msg="StopPodSandbox for \"df65ac1466338b88150286f835f59d4d095b840d03622e56957e4d0e53a872c5\"" Dec 13 07:38:35.113533 kernel: kauditd_printk_skb: 8 callbacks suppressed Dec 13 07:38:35.114174 kernel: audit: type=1400 audit(1734075515.102:293): avc: denied { write } for pid=3767 comm="tee" name="fd" dev="proc" ino=30361 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 07:38:35.102000 audit[3767]: AVC avc: denied { write } for pid=3767 comm="tee" name="fd" dev="proc" ino=30361 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 07:38:35.122950 kernel: audit: type=1400 audit(1734075515.108:294): avc: denied { write } for pid=3657 comm="tee" name="fd" dev="proc" ino=29451 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 07:38:35.108000 audit[3657]: AVC avc: denied { write } for pid=3657 comm="tee" name="fd" dev="proc" ino=29451 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 07:38:35.108000 audit[3657]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe28fcb9f6 a2=241 a3=1b6 items=1 ppid=3598 pid=3657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:35.134911 kernel: audit: type=1300 audit(1734075515.108:294): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe28fcb9f6 a2=241 a3=1b6 items=1 ppid=3598 pid=3657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:35.108000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Dec 13 07:38:35.146968 kernel: audit: type=1307 audit(1734075515.108:294): cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Dec 13 
07:38:35.108000 audit: PATH item=0 name="/dev/fd/63" inode=30276 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:38:35.153930 kernel: audit: type=1302 audit(1734075515.108:294): item=0 name="/dev/fd/63" inode=30276 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:38:35.162922 kernel: audit: type=1327 audit(1734075515.108:294): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 07:38:35.108000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 07:38:35.113000 audit[3703]: AVC avc: denied { write } for pid=3703 comm="tee" name="fd" dev="proc" ino=29453 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 07:38:35.175951 kernel: audit: type=1400 audit(1734075515.113:295): avc: denied { write } for pid=3703 comm="tee" name="fd" dev="proc" ino=29453 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 07:38:35.189305 kernel: audit: type=1300 audit(1734075515.113:295): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe8f7f1a07 a2=241 a3=1b6 items=1 ppid=3597 pid=3703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:35.113000 audit[3703]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe8f7f1a07 a2=241 a3=1b6 items=1 ppid=3597 pid=3703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:35.194227 env[1313]: time="2024-12-13T07:38:35.194145958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-msfc4,Uid:58a0fd67-8895-4d73-a1c6-6bb7d5fa543a,Namespace:kube-system,Attempt:1,} returns sandbox id \"1bb6a5835ca29f4649c9fdf0be51d8bc802811478bcfd9119f105c2646b7abc3\"" Dec 13 07:38:35.113000 audit: CWD cwd="/etc/service/enabled/bird/log" Dec 13 07:38:35.203902 kernel: audit: type=1307 audit(1734075515.113:295): cwd="/etc/service/enabled/bird/log" Dec 13 07:38:35.113000 audit: PATH item=0 name="/dev/fd/63" inode=29340 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:38:35.212910 kernel: audit: type=1302 audit(1734075515.113:295): item=0 name="/dev/fd/63" inode=29340 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:38:35.113000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 07:38:35.114000 audit[3706]: AVC avc: denied { write } for pid=3706 comm="tee" name="fd" dev="proc" ino=29457 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 07:38:35.114000 audit[3706]: SYSCALL arch=c000003e syscall=257 success=yes 
exit=3 a0=ffffff9c a1=7fff0236fa06 a2=241 a3=1b6 items=1 ppid=3600 pid=3706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:35.114000 audit: CWD cwd="/etc/service/enabled/felix/log" Dec 13 07:38:35.114000 audit: PATH item=0 name="/dev/fd/63" inode=29341 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:38:35.114000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 07:38:35.102000 audit[3767]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffff70aa9f7 a2=241 a3=1b6 items=1 ppid=3604 pid=3767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:35.102000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Dec 13 07:38:35.102000 audit: PATH item=0 name="/dev/fd/63" inode=29448 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:38:35.102000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 07:38:35.138000 audit[3753]: AVC avc: denied { write } for pid=3753 comm="tee" name="fd" dev="proc" ino=30365 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 07:38:35.138000 audit[3753]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe37ab3a08 a2=241 a3=1b6 items=1 ppid=3607 pid=3753 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:35.138000 audit: CWD cwd="/etc/service/enabled/cni/log" Dec 13 07:38:35.138000 audit: PATH item=0 name="/dev/fd/63" inode=29430 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:38:35.138000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 07:38:35.218768 env[1313]: time="2024-12-13T07:38:35.218697781Z" level=info msg="CreateContainer within sandbox \"1bb6a5835ca29f4649c9fdf0be51d8bc802811478bcfd9119f105c2646b7abc3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 07:38:35.159000 audit[3708]: AVC avc: denied { write } for pid=3708 comm="tee" name="fd" dev="proc" ino=30393 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 07:38:35.159000 audit[3708]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffec1cffa06 a2=241 a3=1b6 items=1 ppid=3599 pid=3708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:35.159000 audit: CWD cwd="/etc/service/enabled/bird6/log" Dec 13 07:38:35.159000 audit: PATH item=0 name="/dev/fd/63" 
inode=29344 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:38:35.159000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 07:38:35.204000 audit[3785]: AVC avc: denied { write } for pid=3785 comm="tee" name="fd" dev="proc" ino=29475 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 07:38:35.204000 audit[3785]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe4279ba06 a2=241 a3=1b6 items=1 ppid=3615 pid=3785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:35.204000 audit: CWD cwd="/etc/service/enabled/confd/log" Dec 13 07:38:35.204000 audit: PATH item=0 name="/dev/fd/63" inode=30395 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 07:38:35.204000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 07:38:35.332838 env[1313]: time="2024-12-13T07:38:35.331571837Z" level=info msg="CreateContainer within sandbox \"1bb6a5835ca29f4649c9fdf0be51d8bc802811478bcfd9119f105c2646b7abc3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c19ae1de47ed62767827c1575cea1ddccb1f4a0674a4c998127a33a27b1bf0a8\"" Dec 13 07:38:35.335866 env[1313]: time="2024-12-13T07:38:35.335572887Z" level=info msg="StartContainer for \"c19ae1de47ed62767827c1575cea1ddccb1f4a0674a4c998127a33a27b1bf0a8\"" Dec 13 07:38:35.450578 env[1313]: time="2024-12-13T07:38:35.450500560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8888c77cc-mzdlq,Uid:d8ba17ac-a602-4e90-9b1d-ab55d1e3cb01,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"c43351c71239975c0ccf7fd8c810399aeb1b96a60904f13b3d26d77e8b0added\"" Dec 13 07:38:35.474455 env[1313]: time="2024-12-13T07:38:35.474253780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 07:38:35.568345 env[1313]: 2024-12-13 07:38:35.419 [INFO][3758] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="df65ac1466338b88150286f835f59d4d095b840d03622e56957e4d0e53a872c5" Dec 13 07:38:35.568345 env[1313]: 2024-12-13 07:38:35.419 [INFO][3758] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="df65ac1466338b88150286f835f59d4d095b840d03622e56957e4d0e53a872c5" iface="eth0" netns="/var/run/netns/cni-af01d22f-7735-139b-5e0e-9a5ddd7d3e30" Dec 13 07:38:35.568345 env[1313]: 2024-12-13 07:38:35.420 [INFO][3758] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="df65ac1466338b88150286f835f59d4d095b840d03622e56957e4d0e53a872c5" iface="eth0" netns="/var/run/netns/cni-af01d22f-7735-139b-5e0e-9a5ddd7d3e30" Dec 13 07:38:35.568345 env[1313]: 2024-12-13 07:38:35.420 [INFO][3758] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="df65ac1466338b88150286f835f59d4d095b840d03622e56957e4d0e53a872c5" iface="eth0" netns="/var/run/netns/cni-af01d22f-7735-139b-5e0e-9a5ddd7d3e30" Dec 13 07:38:35.568345 env[1313]: 2024-12-13 07:38:35.420 [INFO][3758] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="df65ac1466338b88150286f835f59d4d095b840d03622e56957e4d0e53a872c5" Dec 13 07:38:35.568345 env[1313]: 2024-12-13 07:38:35.420 [INFO][3758] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df65ac1466338b88150286f835f59d4d095b840d03622e56957e4d0e53a872c5" Dec 13 07:38:35.568345 env[1313]: 2024-12-13 07:38:35.544 [INFO][3813] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="df65ac1466338b88150286f835f59d4d095b840d03622e56957e4d0e53a872c5" HandleID="k8s-pod-network.df65ac1466338b88150286f835f59d4d095b840d03622e56957e4d0e53a872c5" Workload="srv--ktue8.gb1.brightbox.com-k8s-coredns--76f75df574--cfqbg-eth0" Dec 13 07:38:35.568345 env[1313]: 2024-12-13 07:38:35.549 [INFO][3813] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 07:38:35.568345 env[1313]: 2024-12-13 07:38:35.549 [INFO][3813] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 07:38:35.568345 env[1313]: 2024-12-13 07:38:35.561 [WARNING][3813] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="df65ac1466338b88150286f835f59d4d095b840d03622e56957e4d0e53a872c5" HandleID="k8s-pod-network.df65ac1466338b88150286f835f59d4d095b840d03622e56957e4d0e53a872c5" Workload="srv--ktue8.gb1.brightbox.com-k8s-coredns--76f75df574--cfqbg-eth0" Dec 13 07:38:35.568345 env[1313]: 2024-12-13 07:38:35.561 [INFO][3813] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="df65ac1466338b88150286f835f59d4d095b840d03622e56957e4d0e53a872c5" HandleID="k8s-pod-network.df65ac1466338b88150286f835f59d4d095b840d03622e56957e4d0e53a872c5" Workload="srv--ktue8.gb1.brightbox.com-k8s-coredns--76f75df574--cfqbg-eth0" Dec 13 07:38:35.568345 env[1313]: 2024-12-13 07:38:35.564 [INFO][3813] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 07:38:35.568345 env[1313]: 2024-12-13 07:38:35.566 [INFO][3758] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="df65ac1466338b88150286f835f59d4d095b840d03622e56957e4d0e53a872c5" Dec 13 07:38:35.572958 systemd[1]: run-netns-cni\x2daf01d22f\x2d7735\x2d139b\x2d5e0e\x2d9a5ddd7d3e30.mount: Deactivated successfully. 
Dec 13 07:38:35.581087 env[1313]: time="2024-12-13T07:38:35.581005255Z" level=info msg="TearDown network for sandbox \"df65ac1466338b88150286f835f59d4d095b840d03622e56957e4d0e53a872c5\" successfully" Dec 13 07:38:35.581284 env[1313]: time="2024-12-13T07:38:35.581249644Z" level=info msg="StopPodSandbox for \"df65ac1466338b88150286f835f59d4d095b840d03622e56957e4d0e53a872c5\" returns successfully" Dec 13 07:38:35.582614 env[1313]: time="2024-12-13T07:38:35.582573892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-cfqbg,Uid:7ce172f1-b87c-421e-99dc-6fd967b7a919,Namespace:kube-system,Attempt:1,}" Dec 13 07:38:35.674174 env[1313]: time="2024-12-13T07:38:35.674100081Z" level=info msg="StartContainer for \"c19ae1de47ed62767827c1575cea1ddccb1f4a0674a4c998127a33a27b1bf0a8\" returns successfully" Dec 13 07:38:35.887385 systemd-networkd[1082]: cali594ae20e984: Link UP Dec 13 07:38:35.891972 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 07:38:35.895795 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali594ae20e984: link becomes ready Dec 13 07:38:35.894815 systemd-networkd[1082]: cali594ae20e984: Gained carrier Dec 13 07:38:35.925296 env[1313]: 2024-12-13 07:38:35.716 [INFO][3845] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 07:38:35.925296 env[1313]: 2024-12-13 07:38:35.737 [INFO][3845] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--ktue8.gb1.brightbox.com-k8s-coredns--76f75df574--cfqbg-eth0 coredns-76f75df574- kube-system 7ce172f1-b87c-421e-99dc-6fd967b7a919 815 0 2024-12-13 07:37:57 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-ktue8.gb1.brightbox.com coredns-76f75df574-cfqbg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali594ae20e984 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c0b32b58b0ea5996503824f5e7727282a2158080cbb1bc3253968b8da3d07533" Namespace="kube-system" Pod="coredns-76f75df574-cfqbg" WorkloadEndpoint="srv--ktue8.gb1.brightbox.com-k8s-coredns--76f75df574--cfqbg-" Dec 13 07:38:35.925296 env[1313]: 2024-12-13 07:38:35.737 [INFO][3845] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c0b32b58b0ea5996503824f5e7727282a2158080cbb1bc3253968b8da3d07533" Namespace="kube-system" Pod="coredns-76f75df574-cfqbg" WorkloadEndpoint="srv--ktue8.gb1.brightbox.com-k8s-coredns--76f75df574--cfqbg-eth0" Dec 13 07:38:35.925296 env[1313]: 2024-12-13 07:38:35.801 [INFO][3860] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c0b32b58b0ea5996503824f5e7727282a2158080cbb1bc3253968b8da3d07533" HandleID="k8s-pod-network.c0b32b58b0ea5996503824f5e7727282a2158080cbb1bc3253968b8da3d07533" Workload="srv--ktue8.gb1.brightbox.com-k8s-coredns--76f75df574--cfqbg-eth0" Dec 13 07:38:35.925296 env[1313]: 2024-12-13 07:38:35.817 [INFO][3860] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c0b32b58b0ea5996503824f5e7727282a2158080cbb1bc3253968b8da3d07533" HandleID="k8s-pod-network.c0b32b58b0ea5996503824f5e7727282a2158080cbb1bc3253968b8da3d07533" Workload="srv--ktue8.gb1.brightbox.com-k8s-coredns--76f75df574--cfqbg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000291960), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-ktue8.gb1.brightbox.com", "pod":"coredns-76f75df574-cfqbg", 
"timestamp":"2024-12-13 07:38:35.801056059 +0000 UTC"}, Hostname:"srv-ktue8.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 07:38:35.925296 env[1313]: 2024-12-13 07:38:35.818 [INFO][3860] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 07:38:35.925296 env[1313]: 2024-12-13 07:38:35.818 [INFO][3860] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 07:38:35.925296 env[1313]: 2024-12-13 07:38:35.818 [INFO][3860] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-ktue8.gb1.brightbox.com' Dec 13 07:38:35.925296 env[1313]: 2024-12-13 07:38:35.822 [INFO][3860] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c0b32b58b0ea5996503824f5e7727282a2158080cbb1bc3253968b8da3d07533" host="srv-ktue8.gb1.brightbox.com" Dec 13 07:38:35.925296 env[1313]: 2024-12-13 07:38:35.829 [INFO][3860] ipam/ipam.go 372: Looking up existing affinities for host host="srv-ktue8.gb1.brightbox.com" Dec 13 07:38:35.925296 env[1313]: 2024-12-13 07:38:35.839 [INFO][3860] ipam/ipam.go 489: Trying affinity for 192.168.109.0/26 host="srv-ktue8.gb1.brightbox.com" Dec 13 07:38:35.925296 env[1313]: 2024-12-13 07:38:35.847 [INFO][3860] ipam/ipam.go 155: Attempting to load block cidr=192.168.109.0/26 host="srv-ktue8.gb1.brightbox.com" Dec 13 07:38:35.925296 env[1313]: 2024-12-13 07:38:35.852 [INFO][3860] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.109.0/26 host="srv-ktue8.gb1.brightbox.com" Dec 13 07:38:35.925296 env[1313]: 2024-12-13 07:38:35.852 [INFO][3860] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.109.0/26 handle="k8s-pod-network.c0b32b58b0ea5996503824f5e7727282a2158080cbb1bc3253968b8da3d07533" host="srv-ktue8.gb1.brightbox.com" Dec 13 07:38:35.925296 env[1313]: 2024-12-13 07:38:35.856 [INFO][3860] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c0b32b58b0ea5996503824f5e7727282a2158080cbb1bc3253968b8da3d07533 Dec 13 07:38:35.925296 env[1313]: 2024-12-13 07:38:35.864 [INFO][3860] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.109.0/26 handle="k8s-pod-network.c0b32b58b0ea5996503824f5e7727282a2158080cbb1bc3253968b8da3d07533" host="srv-ktue8.gb1.brightbox.com" Dec 13 07:38:35.925296 env[1313]: 2024-12-13 07:38:35.876 [INFO][3860] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.109.3/26] block=192.168.109.0/26 handle="k8s-pod-network.c0b32b58b0ea5996503824f5e7727282a2158080cbb1bc3253968b8da3d07533" host="srv-ktue8.gb1.brightbox.com" Dec 13 07:38:35.925296 env[1313]: 2024-12-13 07:38:35.876 [INFO][3860] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.109.3/26] handle="k8s-pod-network.c0b32b58b0ea5996503824f5e7727282a2158080cbb1bc3253968b8da3d07533" host="srv-ktue8.gb1.brightbox.com" Dec 13 07:38:35.925296 env[1313]: 2024-12-13 07:38:35.876 [INFO][3860] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 07:38:35.925296 env[1313]: 2024-12-13 07:38:35.876 [INFO][3860] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.109.3/26] IPv6=[] ContainerID="c0b32b58b0ea5996503824f5e7727282a2158080cbb1bc3253968b8da3d07533" HandleID="k8s-pod-network.c0b32b58b0ea5996503824f5e7727282a2158080cbb1bc3253968b8da3d07533" Workload="srv--ktue8.gb1.brightbox.com-k8s-coredns--76f75df574--cfqbg-eth0" Dec 13 07:38:35.929751 env[1313]: 2024-12-13 07:38:35.879 [INFO][3845] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c0b32b58b0ea5996503824f5e7727282a2158080cbb1bc3253968b8da3d07533" Namespace="kube-system" Pod="coredns-76f75df574-cfqbg" WorkloadEndpoint="srv--ktue8.gb1.brightbox.com-k8s-coredns--76f75df574--cfqbg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ktue8.gb1.brightbox.com-k8s-coredns--76f75df574--cfqbg-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"7ce172f1-b87c-421e-99dc-6fd967b7a919", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 7, 37, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ktue8.gb1.brightbox.com", ContainerID:"", Pod:"coredns-76f75df574-cfqbg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali594ae20e984", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 07:38:35.929751 env[1313]: 2024-12-13 07:38:35.879 [INFO][3845] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.109.3/32] ContainerID="c0b32b58b0ea5996503824f5e7727282a2158080cbb1bc3253968b8da3d07533" Namespace="kube-system" Pod="coredns-76f75df574-cfqbg" WorkloadEndpoint="srv--ktue8.gb1.brightbox.com-k8s-coredns--76f75df574--cfqbg-eth0" Dec 13 07:38:35.929751 env[1313]: 2024-12-13 07:38:35.879 [INFO][3845] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali594ae20e984 ContainerID="c0b32b58b0ea5996503824f5e7727282a2158080cbb1bc3253968b8da3d07533" Namespace="kube-system" Pod="coredns-76f75df574-cfqbg" WorkloadEndpoint="srv--ktue8.gb1.brightbox.com-k8s-coredns--76f75df574--cfqbg-eth0" Dec 13 07:38:35.929751 env[1313]: 2024-12-13 07:38:35.896 [INFO][3845] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c0b32b58b0ea5996503824f5e7727282a2158080cbb1bc3253968b8da3d07533" Namespace="kube-system" Pod="coredns-76f75df574-cfqbg" 
WorkloadEndpoint="srv--ktue8.gb1.brightbox.com-k8s-coredns--76f75df574--cfqbg-eth0" Dec 13 07:38:35.929751 env[1313]: 2024-12-13 07:38:35.896 [INFO][3845] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c0b32b58b0ea5996503824f5e7727282a2158080cbb1bc3253968b8da3d07533" Namespace="kube-system" Pod="coredns-76f75df574-cfqbg" WorkloadEndpoint="srv--ktue8.gb1.brightbox.com-k8s-coredns--76f75df574--cfqbg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ktue8.gb1.brightbox.com-k8s-coredns--76f75df574--cfqbg-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"7ce172f1-b87c-421e-99dc-6fd967b7a919", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 7, 37, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ktue8.gb1.brightbox.com", ContainerID:"c0b32b58b0ea5996503824f5e7727282a2158080cbb1bc3253968b8da3d07533", Pod:"coredns-76f75df574-cfqbg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali594ae20e984", MAC:"6e:47:ae:f2:e8:33", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 07:38:35.929751 env[1313]: 2024-12-13 07:38:35.917 [INFO][3845] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c0b32b58b0ea5996503824f5e7727282a2158080cbb1bc3253968b8da3d07533" Namespace="kube-system" Pod="coredns-76f75df574-cfqbg" WorkloadEndpoint="srv--ktue8.gb1.brightbox.com-k8s-coredns--76f75df574--cfqbg-eth0" Dec 13 07:38:35.964781 env[1313]: time="2024-12-13T07:38:35.964619258Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 07:38:35.965042 env[1313]: time="2024-12-13T07:38:35.964767057Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 07:38:35.965042 env[1313]: time="2024-12-13T07:38:35.964944385Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 07:38:35.965487 env[1313]: time="2024-12-13T07:38:35.965428578Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c0b32b58b0ea5996503824f5e7727282a2158080cbb1bc3253968b8da3d07533 pid=3887 runtime=io.containerd.runc.v2 Dec 13 07:38:36.057179 systemd-networkd[1082]: calicb89ca73ece: Gained IPv6LL Dec 13 07:38:36.081802 env[1313]: time="2024-12-13T07:38:36.081729033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-cfqbg,Uid:7ce172f1-b87c-421e-99dc-6fd967b7a919,Namespace:kube-system,Attempt:1,} returns sandbox id \"c0b32b58b0ea5996503824f5e7727282a2158080cbb1bc3253968b8da3d07533\"" Dec 13 07:38:36.091747 env[1313]: time="2024-12-13T07:38:36.091681770Z" level=info msg="CreateContainer within sandbox \"c0b32b58b0ea5996503824f5e7727282a2158080cbb1bc3253968b8da3d07533\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 07:38:36.104026 env[1313]: time="2024-12-13T07:38:36.103967268Z" level=info msg="CreateContainer within sandbox \"c0b32b58b0ea5996503824f5e7727282a2158080cbb1bc3253968b8da3d07533\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1322f17457869d6b97ed5814d4a516a63fa2f13eec04e74dfe5ddf9cbb9f26c8\"" Dec 13 07:38:36.108175 env[1313]: time="2024-12-13T07:38:36.105347036Z" level=info msg="StartContainer for \"1322f17457869d6b97ed5814d4a516a63fa2f13eec04e74dfe5ddf9cbb9f26c8\"" Dec 13 07:38:36.173655 env[1313]: time="2024-12-13T07:38:36.171075051Z" level=info msg="StartContainer for \"1322f17457869d6b97ed5814d4a516a63fa2f13eec04e74dfe5ddf9cbb9f26c8\" returns successfully" Dec 13 07:38:36.353715 kubelet[2265]: I1213 07:38:36.353649 2265 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 07:38:36.366582 systemd-networkd[1082]: calia6bb70ac1af: Gained IPv6LL Dec 13 07:38:36.390726 kubelet[2265]: I1213 07:38:36.390665 2265 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-msfc4" podStartSLOduration=39.390526691 podStartE2EDuration="39.390526691s" podCreationTimestamp="2024-12-13 07:37:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 07:38:36.367306486 +0000 UTC m=+51.644600831" watchObservedRunningTime="2024-12-13 07:38:36.390526691 +0000 UTC m=+51.667821035" Dec 13 07:38:36.521559 kubelet[2265]: I1213 07:38:36.521374 2265 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-cfqbg" podStartSLOduration=39.52130729 podStartE2EDuration="39.52130729s" podCreationTimestamp="2024-12-13 07:37:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 07:38:36.438433063 +0000 UTC m=+51.715727409" watchObservedRunningTime="2024-12-13 07:38:36.52130729 +0000 UTC m=+51.798601633" Dec 13 07:38:36.587000 audit[3970]: NETFILTER_CFG table=filter:95 family=2 entries=15 op=nft_register_rule pid=3970 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 07:38:36.587000 audit[3970]: SYSCALL arch=c000003e syscall=46 success=yes exit=4420 a0=3 a1=7ffdcdaa1210 a2=0 a3=7ffdcdaa11fc items=0 ppid=2405 pid=3970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:36.587000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 07:38:36.621000 audit[3970]: NETFILTER_CFG table=nat:96 family=2 entries=45 op=nft_register_chain pid=3970 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 07:38:36.621000 audit[3970]: SYSCALL arch=c000003e syscall=46 success=yes exit=19092 a0=3 a1=7ffdcdaa1210 a2=0 a3=7ffdcdaa11fc items=0 ppid=2405 pid=3970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:36.621000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 07:38:36.716000 audit[3976]: NETFILTER_CFG table=filter:97 family=2 entries=11 op=nft_register_rule pid=3976 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 07:38:36.716000 audit[3976]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffd70115ce0 a2=0 a3=7ffd70115ccc items=0 ppid=2405 pid=3976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:36.716000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 07:38:36.733000 audit[3976]: NETFILTER_CFG table=nat:98 family=2 entries=25 op=nft_register_chain pid=3976 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 07:38:36.733000 audit[3976]: SYSCALL arch=c000003e syscall=46 success=yes exit=8580 a0=3 a1=7ffd70115ce0 a2=0 a3=7ffd70115ccc items=0 ppid=2405 pid=3976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:36.733000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 07:38:37.400000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.400000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.400000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.400000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.400000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.400000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Dec 13 07:38:37.400000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.400000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.400000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.400000 audit: BPF prog-id=10 op=LOAD Dec 13 07:38:37.400000 audit[4020]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffef3c44030 a2=98 a3=3 items=0 ppid=3977 pid=4020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:37.400000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 07:38:37.403000 audit: BPF prog-id=10 op=UNLOAD Dec 13 07:38:37.405000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.405000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.405000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.405000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.405000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.405000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.405000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.405000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.405000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.405000 audit: BPF prog-id=11 op=LOAD Dec 13 07:38:37.405000 audit[4020]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffef3c43e10 a2=74 a3=540051 items=0 ppid=3977 pid=4020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:37.405000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 07:38:37.407000 audit: BPF prog-id=11 op=UNLOAD Dec 13 07:38:37.407000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.407000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.407000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.407000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.407000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.407000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.407000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.407000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.407000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.407000 audit: BPF prog-id=12 op=LOAD Dec 13 07:38:37.407000 audit[4020]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffef3c43e40 a2=94 a3=2 items=0 ppid=3977 pid=4020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:37.407000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 07:38:37.408000 audit: BPF prog-id=12 op=UNLOAD Dec 13 07:38:37.653000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.653000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.653000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.653000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.653000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.653000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.653000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.653000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.653000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.653000 audit: BPF prog-id=13 op=LOAD Dec 13 07:38:37.653000 audit[4020]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffef3c43d00 a2=40 a3=1 items=0 ppid=3977 pid=4020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:37.653000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 07:38:37.653000 audit: BPF prog-id=13 op=UNLOAD Dec 13 07:38:37.653000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.653000 audit[4020]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffef3c43dd0 a2=50 a3=7ffef3c43eb0 items=0 ppid=3977 pid=4020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:37.653000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 07:38:37.673000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.673000 audit[4020]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffef3c43d10 a2=28 a3=0 items=0 ppid=3977 pid=4020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:37.673000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 07:38:37.673000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.673000 audit[4020]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffef3c43d40 a2=28 a3=0 items=0 ppid=3977 pid=4020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:37.673000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 07:38:37.673000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.673000 audit[4020]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffef3c43c50 a2=28 a3=0 items=0 ppid=3977 pid=4020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:37.673000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 07:38:37.673000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.673000 audit[4020]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffef3c43d60 a2=28 a3=0 items=0 ppid=3977 pid=4020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:37.673000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 07:38:37.673000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.673000 audit[4020]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffef3c43d40 a2=28 a3=0 items=0 ppid=3977 pid=4020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:37.673000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 07:38:37.673000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.673000 audit[4020]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffef3c43d30 a2=28 a3=0 items=0 ppid=3977 pid=4020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:37.673000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 07:38:37.673000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.673000 audit[4020]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffef3c43d60 a2=28 a3=0 items=0 ppid=3977 pid=4020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:37.673000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 07:38:37.673000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.673000 audit[4020]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffef3c43d40 a2=28 a3=0 items=0 ppid=3977 pid=4020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:37.673000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 07:38:37.673000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.673000 audit[4020]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffef3c43d60 a2=28 a3=0 items=0 ppid=3977 pid=4020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:37.673000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 07:38:37.673000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.673000 audit[4020]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffef3c43d30 a2=28 a3=0 items=0 ppid=3977 pid=4020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:37.673000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 07:38:37.674000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.674000 audit[4020]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffef3c43da0 a2=28 a3=0 items=0 ppid=3977 pid=4020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:37.674000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 07:38:37.674000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.674000 audit[4020]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffef3c43b50 a2=50 a3=1 items=0 ppid=3977 pid=4020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:37.674000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 07:38:37.674000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.674000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.674000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.674000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.674000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.674000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.674000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.674000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.674000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.674000 audit: BPF prog-id=14 op=LOAD Dec 13 07:38:37.674000 audit[4020]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffef3c43b50 a2=94 a3=5 items=0 ppid=3977 pid=4020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:37.674000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 07:38:37.674000 audit: BPF prog-id=14 op=UNLOAD Dec 13 07:38:37.674000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.674000 audit[4020]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffef3c43c00 a2=50 a3=1 items=0 ppid=3977 pid=4020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:37.674000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 07:38:37.674000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.674000 audit[4020]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffef3c43d20 a2=4 a3=38 items=0 ppid=3977 pid=4020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:37.674000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 07:38:37.674000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.674000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.674000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.674000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.674000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.674000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.674000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.674000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.674000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.674000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.674000 audit[4020]: AVC avc: denied { confidentiality } for pid=4020 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 07:38:37.674000 audit[4020]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffef3c43d70 a2=94 a3=6 items=0 ppid=3977 pid=4020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:37.674000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 07:38:37.675000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.675000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.675000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.675000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.675000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.675000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 
07:38:37.675000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.675000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.675000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.675000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.675000 audit[4020]: AVC avc: denied { confidentiality } for pid=4020 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 07:38:37.675000 audit[4020]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffef3c43520 a2=94 a3=83 items=0 ppid=3977 pid=4020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:37.675000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 07:38:37.675000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.675000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.675000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.675000 audit[4020]: AVC avc: denied { bpf } for pid=4020 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.675000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.675000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.675000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.675000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.675000 audit[4020]: AVC avc: denied { perfmon } for pid=4020 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.675000 audit[4020]: AVC avc: denied { bpf } for pid=4020 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.675000 audit[4020]: AVC avc: denied { confidentiality } for pid=4020 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 07:38:37.675000 audit[4020]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffef3c43520 a2=94 a3=83 items=0 ppid=3977 pid=4020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:37.675000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 07:38:37.692000 audit[4023]: AVC avc: denied { bpf } for pid=4023 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.692000 audit[4023]: AVC avc: denied { bpf } for pid=4023 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.692000 audit[4023]: AVC avc: denied { perfmon } for pid=4023 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.692000 audit[4023]: AVC avc: denied { perfmon } for pid=4023 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.692000 audit[4023]: AVC avc: denied { perfmon } for pid=4023 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.692000 audit[4023]: AVC avc: denied { perfmon } for pid=4023 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.692000 audit[4023]: AVC avc: denied { perfmon } for pid=4023 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.692000 audit[4023]: AVC avc: denied { bpf } for pid=4023 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.692000 audit[4023]: AVC avc: denied { bpf } for pid=4023 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.692000 audit: BPF prog-id=15 op=LOAD Dec 13 07:38:37.692000 audit[4023]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe8e3ef2f0 a2=98 a3=1999999999999999 items=0 ppid=3977 pid=4023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:37.692000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 07:38:37.692000 audit: BPF prog-id=15 op=UNLOAD Dec 13 07:38:37.692000 audit[4023]: AVC 
avc: denied { bpf } for pid=4023 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.692000 audit[4023]: AVC avc: denied { bpf } for pid=4023 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.692000 audit[4023]: AVC avc: denied { perfmon } for pid=4023 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.692000 audit[4023]: AVC avc: denied { perfmon } for pid=4023 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.692000 audit[4023]: AVC avc: denied { perfmon } for pid=4023 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.692000 audit[4023]: AVC avc: denied { perfmon } for pid=4023 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.692000 audit[4023]: AVC avc: denied { perfmon } for pid=4023 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.692000 audit[4023]: AVC avc: denied { bpf } for pid=4023 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.692000 audit[4023]: AVC avc: denied { bpf } for pid=4023 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.692000 audit: BPF prog-id=16 op=LOAD Dec 13 07:38:37.692000 audit[4023]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe8e3ef1d0 a2=74 a3=ffff items=0 ppid=3977 pid=4023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:37.692000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 07:38:37.692000 audit: BPF prog-id=16 op=UNLOAD Dec 13 07:38:37.692000 audit[4023]: AVC avc: denied { bpf } for pid=4023 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.692000 audit[4023]: AVC avc: denied { bpf } for pid=4023 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.692000 audit[4023]: AVC avc: denied { perfmon } for pid=4023 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.692000 audit[4023]: AVC avc: denied { perfmon } for pid=4023 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.692000 audit[4023]: AVC avc: denied { perfmon 
} for pid=4023 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.692000 audit[4023]: AVC avc: denied { perfmon } for pid=4023 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.692000 audit[4023]: AVC avc: denied { perfmon } for pid=4023 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.692000 audit[4023]: AVC avc: denied { bpf } for pid=4023 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.692000 audit[4023]: AVC avc: denied { bpf } for pid=4023 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:37.692000 audit: BPF prog-id=17 op=LOAD Dec 13 07:38:37.692000 audit[4023]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe8e3ef210 a2=40 a3=7ffe8e3ef3f0 items=0 ppid=3977 pid=4023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:37.692000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 07:38:37.692000 audit: BPF prog-id=17 op=UNLOAD Dec 13 07:38:37.828317 systemd-networkd[1082]: vxlan.calico: Link UP Dec 13 07:38:37.828336 systemd-networkd[1082]: vxlan.calico: Gained carrier Dec 13 07:38:37.837125 systemd-networkd[1082]: cali594ae20e984: Gained IPv6LL Dec 13 07:38:38.135000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.135000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.135000 audit[4050]: AVC avc: denied { perfmon } for pid=4050 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.135000 audit[4050]: AVC avc: denied { perfmon } for pid=4050 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.135000 audit[4050]: AVC avc: denied { perfmon } for pid=4050 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.135000 audit[4050]: AVC avc: denied { perfmon } for pid=4050 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.135000 audit[4050]: AVC avc: denied { perfmon } for pid=4050 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.135000 audit[4050]: AVC avc: denied { bpf } 
for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.135000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.135000 audit: BPF prog-id=18 op=LOAD Dec 13 07:38:38.135000 audit[4050]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc490bb830 a2=98 a3=100 items=0 ppid=3977 pid=4050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:38.135000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 07:38:38.138000 audit: BPF prog-id=18 op=UNLOAD Dec 13 07:38:38.139000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.139000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.139000 audit[4050]: AVC avc: denied { perfmon } for pid=4050 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.139000 audit[4050]: AVC avc: denied { perfmon } for pid=4050 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.139000 audit[4050]: AVC avc: denied { perfmon } for pid=4050 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.139000 audit[4050]: AVC avc: denied { perfmon } for pid=4050 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.139000 audit[4050]: AVC avc: denied { perfmon } for pid=4050 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.139000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.139000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.139000 audit: BPF prog-id=19 op=LOAD Dec 13 07:38:38.139000 audit[4050]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc490bb640 a2=74 a3=540051 items=0 ppid=3977 pid=4050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:38.139000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 07:38:38.139000 audit: BPF prog-id=19 op=UNLOAD Dec 13 07:38:38.139000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.139000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.139000 audit[4050]: AVC avc: denied { perfmon } for pid=4050 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.139000 audit[4050]: AVC avc: denied { perfmon } for pid=4050 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.139000 audit[4050]: AVC avc: denied { perfmon } for pid=4050 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.139000 audit[4050]: AVC avc: denied { perfmon } for pid=4050 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.139000 audit[4050]: AVC avc: denied { perfmon } for pid=4050 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.139000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.139000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.139000 audit: BPF prog-id=20 op=LOAD Dec 13 07:38:38.139000 audit[4050]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc490bb670 a2=94 a3=2 items=0 ppid=3977 pid=4050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:38.139000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 07:38:38.139000 audit: BPF prog-id=20 op=UNLOAD Dec 13 07:38:38.139000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.139000 audit[4050]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffc490bb540 a2=28 a3=0 items=0 ppid=3977 pid=4050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:38.139000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 07:38:38.139000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.139000 audit[4050]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc490bb570 a2=28 a3=0 items=0 ppid=3977 pid=4050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:38.139000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 07:38:38.139000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.139000 audit[4050]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc490bb480 a2=28 a3=0 items=0 ppid=3977 pid=4050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:38.139000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 07:38:38.140000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.140000 audit[4050]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffc490bb590 a2=28 a3=0 items=0 ppid=3977 pid=4050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:38.140000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 07:38:38.140000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.140000 audit[4050]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffc490bb570 a2=28 a3=0 items=0 ppid=3977 pid=4050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:38.140000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 07:38:38.140000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.140000 audit[4050]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffc490bb560 a2=28 a3=0 items=0 ppid=3977 pid=4050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:38.140000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 07:38:38.140000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.140000 audit[4050]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffc490bb590 a2=28 a3=0 items=0 ppid=3977 pid=4050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:38.140000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 07:38:38.140000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.140000 audit[4050]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc490bb570 a2=28 a3=0 items=0 ppid=3977 pid=4050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:38.140000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 07:38:38.140000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.140000 audit[4050]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc490bb590 a2=28 a3=0 items=0 ppid=3977 pid=4050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:38.140000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 07:38:38.140000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.140000 audit[4050]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc490bb560 a2=28 a3=0 items=0 ppid=3977 pid=4050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:38.140000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 07:38:38.140000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.140000 audit[4050]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffc490bb5d0 a2=28 a3=0 items=0 ppid=3977 pid=4050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:38.140000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 07:38:38.140000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.140000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.140000 audit[4050]: AVC avc: denied { perfmon } for pid=4050 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.140000 audit[4050]: AVC avc: denied { perfmon } for pid=4050 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.140000 audit[4050]: AVC avc: denied { perfmon } for pid=4050 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.140000 audit[4050]: AVC avc: denied { perfmon } for pid=4050 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.140000 audit[4050]: AVC avc: denied { perfmon } for pid=4050 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.140000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.140000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.140000 audit: BPF prog-id=21 op=LOAD Dec 13 07:38:38.140000 audit[4050]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc490bb440 a2=40 a3=0 items=0 ppid=3977 pid=4050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:38.140000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 07:38:38.141000 audit: BPF prog-id=21 op=UNLOAD Dec 13 07:38:38.142000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.142000 audit[4050]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7ffc490bb430 a2=50 a3=2800 items=0 ppid=3977 pid=4050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:38.142000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 07:38:38.142000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.142000 audit[4050]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7ffc490bb430 a2=50 a3=2800 items=0 ppid=3977 pid=4050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:38.142000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 07:38:38.142000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.142000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.142000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.142000 audit[4050]: AVC avc: denied { perfmon } for pid=4050 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.142000 audit[4050]: AVC avc: denied { perfmon } for pid=4050 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.142000 audit[4050]: AVC avc: denied { perfmon } for pid=4050 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.142000 audit[4050]: AVC avc: denied { perfmon } for pid=4050 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.142000 audit[4050]: AVC avc: denied { perfmon } for pid=4050 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.142000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.142000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.142000 audit: BPF prog-id=22 op=LOAD Dec 13 07:38:38.142000 audit[4050]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc490bac50 a2=94 a3=2 items=0 ppid=3977 pid=4050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:38.142000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 07:38:38.142000 audit: BPF prog-id=22 op=UNLOAD Dec 13 07:38:38.142000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.142000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.142000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.142000 audit[4050]: AVC avc: denied { perfmon } for pid=4050 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.142000 audit[4050]: AVC avc: denied { perfmon } for pid=4050 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.142000 audit[4050]: AVC avc: denied { perfmon } for pid=4050 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.142000 audit[4050]: AVC avc: denied { perfmon } for pid=4050 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.142000 audit[4050]: AVC avc: denied { perfmon } for pid=4050 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.142000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.142000 audit[4050]: AVC avc: denied { bpf } for pid=4050 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.142000 audit: BPF prog-id=23 op=LOAD Dec 13 07:38:38.142000 audit[4050]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc490bad50 a2=94 a3=2d items=0 ppid=3977 pid=4050 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:38.142000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 07:38:38.197000 audit[4054]: AVC avc: denied { bpf } for pid=4054 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.197000 audit[4054]: AVC avc: denied { bpf } for pid=4054 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.197000 audit[4054]: AVC avc: denied { perfmon } for pid=4054 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.197000 audit[4054]: AVC avc: denied { perfmon } for pid=4054 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.197000 audit[4054]: AVC avc: denied { perfmon } for pid=4054 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.197000 audit[4054]: AVC avc: denied { perfmon } for pid=4054 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.197000 audit[4054]: AVC avc: denied { perfmon } for pid=4054 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.197000 audit[4054]: AVC avc: denied { bpf } for pid=4054 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.197000 audit[4054]: AVC avc: denied { bpf } for pid=4054 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.197000 audit: BPF prog-id=24 op=LOAD Dec 13 07:38:38.197000 audit[4054]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd7789e430 a2=98 a3=0 items=0 ppid=3977 pid=4054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:38.197000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 07:38:38.198000 audit: BPF prog-id=24 op=UNLOAD Dec 13 07:38:38.198000 audit[4054]: AVC avc: denied { bpf } for pid=4054 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.198000 audit[4054]: AVC avc: denied { bpf } for pid=4054 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.198000 audit[4054]: AVC avc: denied { perfmon } for pid=4054 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.198000 audit[4054]: AVC avc: denied { perfmon } for pid=4054 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.198000 audit[4054]: AVC avc: denied { perfmon } for pid=4054 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.198000 audit[4054]: AVC avc: denied { perfmon } for pid=4054 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.198000 audit[4054]: AVC avc: denied { perfmon } for pid=4054 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.198000 audit[4054]: AVC avc: denied { bpf } for pid=4054 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.198000 audit[4054]: AVC avc: denied { bpf } for pid=4054 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.198000 audit: BPF prog-id=25 op=LOAD Dec 13 07:38:38.198000 audit[4054]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd7789e210 a2=74 a3=540051 items=0 ppid=3977 pid=4054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:38.198000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 07:38:38.198000 audit: BPF prog-id=25 op=UNLOAD Dec 13 07:38:38.198000 audit[4054]: AVC avc: denied { bpf } for pid=4054 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.198000 audit[4054]: AVC avc: denied { bpf } for pid=4054 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.198000 audit[4054]: AVC avc: denied { perfmon } for pid=4054 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.198000 audit[4054]: AVC avc: denied { perfmon } for pid=4054 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.198000 audit[4054]: AVC avc: denied { perfmon } for pid=4054 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.198000 audit[4054]: AVC avc: denied { perfmon } for pid=4054 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.198000 audit[4054]: AVC avc: denied { perfmon } for pid=4054 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Dec 13 07:38:38.198000 audit[4054]: AVC avc: denied { bpf } for pid=4054 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.198000 audit[4054]: AVC avc: denied { bpf } for pid=4054 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.198000 audit: BPF prog-id=26 op=LOAD Dec 13 07:38:38.198000 audit[4054]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd7789e240 a2=94 a3=2 items=0 ppid=3977 pid=4054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:38.198000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 07:38:38.198000 audit: BPF prog-id=26 op=UNLOAD Dec 13 07:38:38.420000 audit[4054]: AVC avc: denied { bpf } for pid=4054 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.420000 audit[4054]: AVC avc: denied { bpf } for pid=4054 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.420000 audit[4054]: AVC avc: denied { perfmon } for pid=4054 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.420000 audit[4054]: AVC avc: denied { perfmon } for pid=4054 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.420000 audit[4054]: AVC avc: denied { perfmon } for pid=4054 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.420000 audit[4054]: AVC avc: denied { perfmon } for pid=4054 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.420000 audit[4054]: AVC avc: denied { perfmon } for pid=4054 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.420000 audit[4054]: AVC avc: denied { bpf } for pid=4054 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.420000 audit[4054]: AVC avc: denied { bpf } for pid=4054 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.420000 audit: BPF prog-id=27 op=LOAD Dec 13 07:38:38.420000 audit[4054]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd7789e100 a2=40 a3=1 items=0 ppid=3977 pid=4054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:38.420000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 07:38:38.420000 audit: BPF prog-id=27 op=UNLOAD Dec 13 07:38:38.420000 audit[4054]: AVC avc: denied { perfmon } for pid=4054 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.420000 audit[4054]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffd7789e1d0 a2=50 a3=7ffd7789e2b0 items=0 ppid=3977 pid=4054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:38.420000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 07:38:38.436000 audit[4054]: AVC avc: denied { bpf } for pid=4054 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.436000 audit[4054]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd7789e110 a2=28 a3=0 items=0 ppid=3977 pid=4054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:38.436000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 07:38:38.436000 audit[4054]: AVC avc: denied { bpf } for pid=4054 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.436000 audit[4054]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd7789e140 a2=28 a3=0 items=0 ppid=3977 pid=4054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:38.436000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 07:38:38.437000 audit[4054]: AVC avc: denied { bpf } for pid=4054 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.437000 audit[4054]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd7789e050 a2=28 a3=0 items=0 ppid=3977 pid=4054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:38.437000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 07:38:38.437000 audit[4054]: AVC avc: denied { bpf } for pid=4054 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Dec 13 07:38:38.437000 audit[4054]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd7789e160 a2=28 a3=0 items=0 ppid=3977 pid=4054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:38.437000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 07:38:38.438000 audit[4054]: AVC avc: denied { bpf } for pid=4054 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.438000 audit[4054]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd7789e140 a2=28 a3=0 items=0 ppid=3977 pid=4054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:38.438000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 07:38:38.438000 audit[4054]: AVC avc: denied { bpf } for pid=4054 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.438000 audit[4054]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd7789e130 a2=28 a3=0 items=0 ppid=3977 pid=4054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:38.438000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 07:38:38.438000 audit[4054]: AVC avc: denied { bpf } for pid=4054 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.438000 audit[4054]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd7789e160 a2=28 a3=0 items=0 ppid=3977 pid=4054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:38.438000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 07:38:38.438000 audit[4054]: AVC avc: denied { bpf } for pid=4054 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.438000 audit[4054]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd7789e140 a2=28 a3=0 items=0 ppid=3977 pid=4054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:38.438000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 07:38:38.438000 audit[4054]: AVC avc: denied { bpf } for pid=4054 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.438000 audit[4054]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd7789e160 a2=28 a3=0 items=0 ppid=3977 pid=4054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:38.438000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 07:38:38.438000 audit[4054]: AVC avc: denied { bpf } for pid=4054 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.438000 audit[4054]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd7789e130 a2=28 a3=0 items=0 ppid=3977 pid=4054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:38.438000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 07:38:38.438000 audit[4054]: AVC avc: denied { bpf } for pid=4054 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.438000 audit[4054]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd7789e1a0 a2=28 a3=0 items=0 ppid=3977 pid=4054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:38.438000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 07:38:38.438000 audit[4054]: AVC avc: denied { perfmon } for pid=4054 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.438000 audit[4054]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffd7789df50 a2=50 a3=1 items=0 ppid=3977 pid=4054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:38.438000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 07:38:38.438000 audit[4054]: AVC avc: denied { bpf } for pid=4054 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.438000 audit[4054]: AVC avc: denied { bpf } 
for pid=4054 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.438000 audit[4054]: AVC avc: denied { perfmon } for pid=4054 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.438000 audit[4054]: AVC avc: denied { perfmon } for pid=4054 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.438000 audit[4054]: AVC avc: denied { perfmon } for pid=4054 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.438000 audit[4054]: AVC avc: denied { perfmon } for pid=4054 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.438000 audit[4054]: AVC avc: denied { perfmon } for pid=4054 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.438000 audit[4054]: AVC avc: denied { bpf } for pid=4054 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.438000 audit[4054]: AVC avc: denied { bpf } for pid=4054 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.438000 audit: BPF prog-id=28 op=LOAD Dec 13 07:38:38.438000 audit[4054]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd7789df50 a2=94 a3=5 items=0 ppid=3977 pid=4054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:38.438000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 07:38:38.438000 audit: BPF prog-id=28 op=UNLOAD Dec 13 07:38:38.441000 audit[4054]: AVC avc: denied { perfmon } for pid=4054 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.441000 audit[4054]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffd7789e000 a2=50 a3=1 items=0 ppid=3977 pid=4054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:38.441000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 07:38:38.441000 audit[4054]: AVC avc: denied { bpf } for pid=4054 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.441000 audit[4054]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffd7789e120 a2=4 a3=38 items=0 ppid=3977 pid=4054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:38.441000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 07:38:38.441000 audit[4054]: AVC avc: denied { bpf } for pid=4054 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.441000 audit[4054]: AVC avc: denied { bpf } for pid=4054 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.441000 audit[4054]: AVC avc: denied { perfmon } for pid=4054 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.441000 audit[4054]: AVC avc: denied { bpf } for pid=4054 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.441000 audit[4054]: AVC avc: denied { perfmon } for pid=4054 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.441000 audit[4054]: AVC avc: denied { perfmon } for pid=4054 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.441000 audit[4054]: AVC avc: denied { perfmon } for pid=4054 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.441000 audit[4054]: AVC avc: denied { perfmon } for pid=4054 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.441000 audit[4054]: AVC avc: denied { perfmon } for pid=4054 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.441000 audit[4054]: AVC avc: denied { bpf } for pid=4054 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.441000 audit[4054]: AVC avc: denied { confidentiality } for pid=4054 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 07:38:38.441000 audit[4054]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffd7789e170 a2=94 a3=6 items=0 ppid=3977 pid=4054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:38.441000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 07:38:38.441000 audit[4054]: AVC avc: denied { bpf } for pid=4054 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.441000 audit[4054]: AVC avc: denied { 
bpf } for pid=4054 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.441000 audit[4054]: AVC avc: denied { perfmon } for pid=4054 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.441000 audit[4054]: AVC avc: denied { bpf } for pid=4054 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.441000 audit[4054]: AVC avc: denied { perfmon } for pid=4054 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.441000 audit[4054]: AVC avc: denied { perfmon } for pid=4054 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.441000 audit[4054]: AVC avc: denied { perfmon } for pid=4054 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.441000 audit[4054]: AVC avc: denied { perfmon } for pid=4054 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.441000 audit[4054]: AVC avc: denied { perfmon } for pid=4054 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.441000 audit[4054]: AVC avc: denied { bpf } for pid=4054 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.441000 audit[4054]: AVC avc: denied { confidentiality } for pid=4054 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 07:38:38.441000 audit[4054]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffd7789d920 a2=94 a3=83 items=0 ppid=3977 pid=4054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:38.441000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 07:38:38.441000 audit[4054]: AVC avc: denied { bpf } for pid=4054 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.441000 audit[4054]: AVC avc: denied { bpf } for pid=4054 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.441000 audit[4054]: AVC avc: denied { perfmon } for pid=4054 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.441000 audit[4054]: AVC avc: denied { bpf } for pid=4054 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 
07:38:38.441000 audit[4054]: AVC avc: denied { perfmon } for pid=4054 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.441000 audit[4054]: AVC avc: denied { perfmon } for pid=4054 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.441000 audit[4054]: AVC avc: denied { perfmon } for pid=4054 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.441000 audit[4054]: AVC avc: denied { perfmon } for pid=4054 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.441000 audit[4054]: AVC avc: denied { perfmon } for pid=4054 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.441000 audit[4054]: AVC avc: denied { bpf } for pid=4054 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.441000 audit[4054]: AVC avc: denied { confidentiality } for pid=4054 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 07:38:38.441000 audit[4054]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffd7789d920 a2=94 a3=83 items=0 ppid=3977 pid=4054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:38.441000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 07:38:38.442000 audit[4054]: AVC avc: denied { bpf } for pid=4054 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.442000 audit[4054]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffd7789f360 a2=10 a3=f1f00800 items=0 ppid=3977 pid=4054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:38.442000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 07:38:38.443000 audit[4054]: AVC avc: denied { bpf } for pid=4054 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.443000 audit[4054]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffd7789f200 a2=10 a3=3 items=0 ppid=3977 pid=4054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:38.443000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 07:38:38.443000 audit[4054]: AVC avc: denied { bpf } for pid=4054 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.443000 audit[4054]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffd7789f1a0 a2=10 a3=3 items=0 ppid=3977 pid=4054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:38.443000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 07:38:38.443000 audit[4054]: AVC avc: denied { bpf } for pid=4054 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 07:38:38.443000 audit[4054]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffd7789f1a0 a2=10 a3=7 items=0 ppid=3977 pid=4054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:38.443000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 07:38:38.458000 audit: BPF prog-id=23 op=UNLOAD Dec 13 07:38:38.787000 audit[4094]: NETFILTER_CFG table=mangle:99 family=2 entries=16 op=nft_register_chain pid=4094 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 07:38:38.787000 audit[4094]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffcea957c00 a2=0 a3=7ffcea957bec items=0 ppid=3977 pid=4094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:38.787000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 07:38:38.802000 audit[4093]: NETFILTER_CFG table=nat:100 family=2 entries=15 op=nft_register_chain pid=4093 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 07:38:38.802000 audit[4093]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffd37ab5b10 a2=0 a3=7ffd37ab5afc items=0 ppid=3977 pid=4093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:38.802000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 07:38:38.806000 audit[4092]: NETFILTER_CFG table=raw:101 family=2 entries=21 op=nft_register_chain pid=4092 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 07:38:38.806000 audit[4092]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffd8e6e4ba0 a2=0 a3=7ffd8e6e4b8c 
items=0 ppid=3977 pid=4092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:38.806000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 07:38:38.842000 audit[4099]: NETFILTER_CFG table=filter:102 family=2 entries=127 op=nft_register_chain pid=4099 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 07:38:38.842000 audit[4099]: SYSCALL arch=c000003e syscall=46 success=yes exit=72316 a0=3 a1=7ffcb3450760 a2=0 a3=7ffcb345074c items=0 ppid=3977 pid=4099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:38.842000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 07:38:39.200265 systemd-networkd[1082]: vxlan.calico: Gained IPv6LL Dec 13 07:38:39.882152 env[1313]: time="2024-12-13T07:38:39.882045902Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:38:39.886801 env[1313]: time="2024-12-13T07:38:39.886755691Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:38:39.891720 env[1313]: time="2024-12-13T07:38:39.891670565Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:38:39.895053 env[1313]: time="2024-12-13T07:38:39.895001591Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:38:39.895971 env[1313]: time="2024-12-13T07:38:39.895772784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 13 07:38:39.902531 env[1313]: time="2024-12-13T07:38:39.902438064Z" level=info msg="CreateContainer within sandbox \"c43351c71239975c0ccf7fd8c810399aeb1b96a60904f13b3d26d77e8b0added\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 07:38:39.935435 env[1313]: time="2024-12-13T07:38:39.935353280Z" level=info msg="CreateContainer within sandbox \"c43351c71239975c0ccf7fd8c810399aeb1b96a60904f13b3d26d77e8b0added\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8536435c3549da813496c210e79c4d06acf990db6245261eef1c40ed6d1bf547\"" Dec 13 07:38:39.936339 env[1313]: time="2024-12-13T07:38:39.936253019Z" level=info msg="StartContainer for \"8536435c3549da813496c210e79c4d06acf990db6245261eef1c40ed6d1bf547\"" Dec 13 07:38:40.004569 systemd[1]: run-containerd-runc-k8s.io-8536435c3549da813496c210e79c4d06acf990db6245261eef1c40ed6d1bf547-runc.lqawHt.mount: Deactivated successfully. 
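The AVC records above repeatedly report capability=39 and capability=38 denials for bpftool running in the kernel_t domain; in include/uapi/linux/capability.h these numbers correspond to CAP_BPF and CAP_PERFMON, which bpftool exercises while loading and inspecting the Calico XDP prefilter programs. A minimal Python sketch for reading these fields (the helper name and the two-entry table are illustrative and cover only the values seen in this log):

    import re

    # Only the two capability numbers that appear in these audit records;
    # the full list lives in include/uapi/linux/capability.h.
    CAP_NAMES = {38: "CAP_PERFMON", 39: "CAP_BPF"}

    def avc_capability(line):
        """Return the name of the capability referenced by an AVC 'capability=N' field, if any."""
        m = re.search(r"capability=(\d+)", line)
        return CAP_NAMES.get(int(m.group(1))) if m else None

    print(avc_capability('avc: denied { bpf } for pid=4050 comm="bpftool" capability=39'))  # CAP_BPF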
Dec 13 07:38:40.121093 env[1313]: time="2024-12-13T07:38:40.121033615Z" level=info msg="StartContainer for \"8536435c3549da813496c210e79c4d06acf990db6245261eef1c40ed6d1bf547\" returns successfully" Dec 13 07:38:40.477337 kernel: kauditd_printk_skb: 517 callbacks suppressed Dec 13 07:38:40.477605 kernel: audit: type=1325 audit(1734075520.471:399): table=filter:103 family=2 entries=10 op=nft_register_rule pid=4141 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 07:38:40.471000 audit[4141]: NETFILTER_CFG table=filter:103 family=2 entries=10 op=nft_register_rule pid=4141 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 07:38:40.471000 audit[4141]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffedb0b9b70 a2=0 a3=7ffedb0b9b5c items=0 ppid=2405 pid=4141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:40.490998 kernel: audit: type=1300 audit(1734075520.471:399): arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffedb0b9b70 a2=0 a3=7ffedb0b9b5c items=0 ppid=2405 pid=4141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:40.471000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 07:38:40.499044 kernel: audit: type=1327 audit(1734075520.471:399): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 07:38:40.499128 kernel: audit: type=1325 audit(1734075520.486:400): table=nat:104 family=2 entries=20 op=nft_register_rule pid=4141 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 07:38:40.486000 audit[4141]: NETFILTER_CFG table=nat:104 family=2 entries=20 op=nft_register_rule pid=4141 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 07:38:40.506910 kernel: audit: type=1300 audit(1734075520.486:400): arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffedb0b9b70 a2=0 a3=7ffedb0b9b5c items=0 ppid=2405 pid=4141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:40.486000 audit[4141]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffedb0b9b70 a2=0 a3=7ffedb0b9b5c items=0 ppid=2405 pid=4141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:40.486000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 07:38:40.511918 kernel: audit: type=1327 audit(1734075520.486:400): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 07:38:41.422088 kubelet[2265]: I1213 07:38:41.422034 2265 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 07:38:41.548403 systemd[1]: run-containerd-runc-k8s.io-f686aa7caff47cf7895fa8aec1a3132bcab42b03efa7a9dbef0ec320ee2de6cd-runc.pHjQny.mount: Deactivated successfully. 
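The PROCTITLE fields in the audit records above are the audited process's argv, hex-encoded with NUL bytes separating the arguments. Decoded, the long bpftool strings read "bpftool prog load /usr/lib/calico/bpf/filter.o /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A type xdp" and "bpftool --json --pretty prog show pinned /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A", while the netfilter ones read "iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000" and "iptables-restore -w 5 -W 100000 --noflush --counters". A minimal decoding sketch (the helper name is illustrative):

    def decode_proctitle(hex_value):
        """Decode an audit PROCTITLE field: hex-encoded argv with NUL separators."""
        raw = bytes.fromhex(hex_value)
        return " ".join(arg.decode("utf-8", "replace") for arg in raw.split(b"\x00") if arg)

    # A shortened prefix of the bpftool record above:
    print(decode_proctitle("627066746F6F6C0070726F67006C6F6164"))  # -> bpftool prog load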
Dec 13 07:38:41.699624 kubelet[2265]: I1213 07:38:41.699484 2265 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8888c77cc-mzdlq" podStartSLOduration=32.256407753 podStartE2EDuration="36.699386904s" podCreationTimestamp="2024-12-13 07:38:05 +0000 UTC" firstStartedPulling="2024-12-13 07:38:35.454555734 +0000 UTC m=+50.731850068" lastFinishedPulling="2024-12-13 07:38:39.897534877 +0000 UTC m=+55.174829219" observedRunningTime="2024-12-13 07:38:40.438877562 +0000 UTC m=+55.716171911" watchObservedRunningTime="2024-12-13 07:38:41.699386904 +0000 UTC m=+56.976681239" Dec 13 07:38:45.014028 env[1313]: time="2024-12-13T07:38:45.013299234Z" level=info msg="StopPodSandbox for \"d9c037e468016503e64e7fabc42ffc6e14dde911867e2872adbd2fec867e09e7\"" Dec 13 07:38:45.042989 env[1313]: time="2024-12-13T07:38:45.042861696Z" level=info msg="StopPodSandbox for \"f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e\"" Dec 13 07:38:45.239669 env[1313]: 2024-12-13 07:38:45.124 [WARNING][4186] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d9c037e468016503e64e7fabc42ffc6e14dde911867e2872adbd2fec867e09e7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ktue8.gb1.brightbox.com-k8s-calico--apiserver--8888c77cc--mzdlq-eth0", GenerateName:"calico-apiserver-8888c77cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"d8ba17ac-a602-4e90-9b1d-ab55d1e3cb01", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 7, 38, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8888c77cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ktue8.gb1.brightbox.com", ContainerID:"c43351c71239975c0ccf7fd8c810399aeb1b96a60904f13b3d26d77e8b0added", Pod:"calico-apiserver-8888c77cc-mzdlq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicb89ca73ece", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 07:38:45.239669 env[1313]: 2024-12-13 07:38:45.126 [INFO][4186] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d9c037e468016503e64e7fabc42ffc6e14dde911867e2872adbd2fec867e09e7" Dec 13 07:38:45.239669 env[1313]: 2024-12-13 07:38:45.126 [INFO][4186] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d9c037e468016503e64e7fabc42ffc6e14dde911867e2872adbd2fec867e09e7" iface="eth0" netns="" Dec 13 07:38:45.239669 env[1313]: 2024-12-13 07:38:45.126 [INFO][4186] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d9c037e468016503e64e7fabc42ffc6e14dde911867e2872adbd2fec867e09e7" Dec 13 07:38:45.239669 env[1313]: 2024-12-13 07:38:45.126 [INFO][4186] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d9c037e468016503e64e7fabc42ffc6e14dde911867e2872adbd2fec867e09e7" Dec 13 07:38:45.239669 env[1313]: 2024-12-13 07:38:45.219 [INFO][4215] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d9c037e468016503e64e7fabc42ffc6e14dde911867e2872adbd2fec867e09e7" HandleID="k8s-pod-network.d9c037e468016503e64e7fabc42ffc6e14dde911867e2872adbd2fec867e09e7" Workload="srv--ktue8.gb1.brightbox.com-k8s-calico--apiserver--8888c77cc--mzdlq-eth0" Dec 13 07:38:45.239669 env[1313]: 2024-12-13 07:38:45.219 [INFO][4215] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 07:38:45.239669 env[1313]: 2024-12-13 07:38:45.219 [INFO][4215] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 07:38:45.239669 env[1313]: 2024-12-13 07:38:45.232 [WARNING][4215] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d9c037e468016503e64e7fabc42ffc6e14dde911867e2872adbd2fec867e09e7" HandleID="k8s-pod-network.d9c037e468016503e64e7fabc42ffc6e14dde911867e2872adbd2fec867e09e7" Workload="srv--ktue8.gb1.brightbox.com-k8s-calico--apiserver--8888c77cc--mzdlq-eth0" Dec 13 07:38:45.239669 env[1313]: 2024-12-13 07:38:45.232 [INFO][4215] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d9c037e468016503e64e7fabc42ffc6e14dde911867e2872adbd2fec867e09e7" HandleID="k8s-pod-network.d9c037e468016503e64e7fabc42ffc6e14dde911867e2872adbd2fec867e09e7" Workload="srv--ktue8.gb1.brightbox.com-k8s-calico--apiserver--8888c77cc--mzdlq-eth0" Dec 13 07:38:45.239669 env[1313]: 2024-12-13 07:38:45.234 [INFO][4215] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 07:38:45.239669 env[1313]: 2024-12-13 07:38:45.236 [INFO][4186] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d9c037e468016503e64e7fabc42ffc6e14dde911867e2872adbd2fec867e09e7" Dec 13 07:38:45.240728 env[1313]: time="2024-12-13T07:38:45.239695230Z" level=info msg="TearDown network for sandbox \"d9c037e468016503e64e7fabc42ffc6e14dde911867e2872adbd2fec867e09e7\" successfully" Dec 13 07:38:45.240728 env[1313]: time="2024-12-13T07:38:45.239750199Z" level=info msg="StopPodSandbox for \"d9c037e468016503e64e7fabc42ffc6e14dde911867e2872adbd2fec867e09e7\" returns successfully" Dec 13 07:38:45.248287 env[1313]: time="2024-12-13T07:38:45.248248452Z" level=info msg="RemovePodSandbox for \"d9c037e468016503e64e7fabc42ffc6e14dde911867e2872adbd2fec867e09e7\"" Dec 13 07:38:45.248483 env[1313]: time="2024-12-13T07:38:45.248358537Z" level=info msg="Forcibly stopping sandbox \"d9c037e468016503e64e7fabc42ffc6e14dde911867e2872adbd2fec867e09e7\"" Dec 13 07:38:45.258788 env[1313]: 2024-12-13 07:38:45.153 [INFO][4206] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e" Dec 13 07:38:45.258788 env[1313]: 2024-12-13 07:38:45.154 [INFO][4206] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e" iface="eth0" netns="/var/run/netns/cni-f34ae1b6-2d44-04da-6bc7-9dbf2780e249" Dec 13 07:38:45.258788 env[1313]: 2024-12-13 07:38:45.154 [INFO][4206] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e" iface="eth0" netns="/var/run/netns/cni-f34ae1b6-2d44-04da-6bc7-9dbf2780e249" Dec 13 07:38:45.258788 env[1313]: 2024-12-13 07:38:45.155 [INFO][4206] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e" iface="eth0" netns="/var/run/netns/cni-f34ae1b6-2d44-04da-6bc7-9dbf2780e249" Dec 13 07:38:45.258788 env[1313]: 2024-12-13 07:38:45.155 [INFO][4206] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e" Dec 13 07:38:45.258788 env[1313]: 2024-12-13 07:38:45.155 [INFO][4206] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e" Dec 13 07:38:45.258788 env[1313]: 2024-12-13 07:38:45.231 [INFO][4220] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e" HandleID="k8s-pod-network.f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e" Workload="srv--ktue8.gb1.brightbox.com-k8s-csi--node--driver--jgsz2-eth0" Dec 13 07:38:45.258788 env[1313]: 2024-12-13 07:38:45.231 [INFO][4220] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 07:38:45.258788 env[1313]: 2024-12-13 07:38:45.236 [INFO][4220] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 07:38:45.258788 env[1313]: 2024-12-13 07:38:45.252 [WARNING][4220] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e" HandleID="k8s-pod-network.f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e" Workload="srv--ktue8.gb1.brightbox.com-k8s-csi--node--driver--jgsz2-eth0" Dec 13 07:38:45.258788 env[1313]: 2024-12-13 07:38:45.252 [INFO][4220] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e" HandleID="k8s-pod-network.f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e" Workload="srv--ktue8.gb1.brightbox.com-k8s-csi--node--driver--jgsz2-eth0" Dec 13 07:38:45.258788 env[1313]: 2024-12-13 07:38:45.254 [INFO][4220] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 07:38:45.258788 env[1313]: 2024-12-13 07:38:45.256 [INFO][4206] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e" Dec 13 07:38:45.270282 env[1313]: time="2024-12-13T07:38:45.264040041Z" level=info msg="TearDown network for sandbox \"f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e\" successfully" Dec 13 07:38:45.270282 env[1313]: time="2024-12-13T07:38:45.264107966Z" level=info msg="StopPodSandbox for \"f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e\" returns successfully" Dec 13 07:38:45.266591 systemd[1]: run-netns-cni\x2df34ae1b6\x2d2d44\x2d04da\x2d6bc7\x2d9dbf2780e249.mount: Deactivated successfully. 
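The run-netns-cni\x2df34ae1b6\x2d2d44\x2d04da\x2d6bc7\x2d9dbf2780e249.mount unit that systemd reports as deactivated above is the escaped form of the network-namespace mount point the CNI teardown records reference (they name it /var/run/netns/cni-f34ae1b6-2d44-04da-6bc7-9dbf2780e249; /var/run is typically a symlink to /run): in systemd unit names "-" stands for "/" and other characters are encoded as \xHH, so \x2d is a literal "-". A minimal unescaping sketch, sufficient for the unit names seen in this log (the helper name is illustrative; systemd-escape --unescape --path is the canonical tool):

    import re

    def systemd_unit_to_path(unit):
        """Reverse systemd mount-unit escaping: strip '.mount', turn '-' into '/',
        then decode \\xHH escapes (e.g. \\x2d back into a literal '-')."""
        name = unit[:-len(".mount")] if unit.endswith(".mount") else unit
        path = "/" + name.replace("-", "/")
        return re.sub(r"\\x([0-9a-fA-F]{2})", lambda m: chr(int(m.group(1), 16)), path)

    print(systemd_unit_to_path(r"run-netns-cni\x2df34ae1b6\x2d2d44\x2d04da\x2d6bc7\x2d9dbf2780e249.mount"))
    # -> /run/netns/cni-f34ae1b6-2d44-04da-6bc7-9dbf2780e249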
Dec 13 07:38:45.270940 env[1313]: time="2024-12-13T07:38:45.270579470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jgsz2,Uid:6755c6bd-417c-469c-9e0b-b65078e35af8,Namespace:calico-system,Attempt:1,}" Dec 13 07:38:45.460074 env[1313]: 2024-12-13 07:38:45.359 [WARNING][4241] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d9c037e468016503e64e7fabc42ffc6e14dde911867e2872adbd2fec867e09e7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ktue8.gb1.brightbox.com-k8s-calico--apiserver--8888c77cc--mzdlq-eth0", GenerateName:"calico-apiserver-8888c77cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"d8ba17ac-a602-4e90-9b1d-ab55d1e3cb01", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 7, 38, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8888c77cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ktue8.gb1.brightbox.com", ContainerID:"c43351c71239975c0ccf7fd8c810399aeb1b96a60904f13b3d26d77e8b0added", Pod:"calico-apiserver-8888c77cc-mzdlq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicb89ca73ece", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 07:38:45.460074 env[1313]: 2024-12-13 07:38:45.360 [INFO][4241] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d9c037e468016503e64e7fabc42ffc6e14dde911867e2872adbd2fec867e09e7" Dec 13 07:38:45.460074 env[1313]: 2024-12-13 07:38:45.360 [INFO][4241] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d9c037e468016503e64e7fabc42ffc6e14dde911867e2872adbd2fec867e09e7" iface="eth0" netns="" Dec 13 07:38:45.460074 env[1313]: 2024-12-13 07:38:45.360 [INFO][4241] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d9c037e468016503e64e7fabc42ffc6e14dde911867e2872adbd2fec867e09e7" Dec 13 07:38:45.460074 env[1313]: 2024-12-13 07:38:45.360 [INFO][4241] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d9c037e468016503e64e7fabc42ffc6e14dde911867e2872adbd2fec867e09e7" Dec 13 07:38:45.460074 env[1313]: 2024-12-13 07:38:45.442 [INFO][4247] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d9c037e468016503e64e7fabc42ffc6e14dde911867e2872adbd2fec867e09e7" HandleID="k8s-pod-network.d9c037e468016503e64e7fabc42ffc6e14dde911867e2872adbd2fec867e09e7" Workload="srv--ktue8.gb1.brightbox.com-k8s-calico--apiserver--8888c77cc--mzdlq-eth0" Dec 13 07:38:45.460074 env[1313]: 2024-12-13 07:38:45.443 [INFO][4247] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 07:38:45.460074 env[1313]: 2024-12-13 07:38:45.443 [INFO][4247] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
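Back at 07:38:41.699, the kubelet pod_startup_latency_tracker record for calico-apiserver-8888c77cc-mzdlq carries three figures that are mutually consistent: podStartE2EDuration equals watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration equals that end-to-end time minus the image-pull window (lastFinishedPulling minus firstStartedPulling, taken from the monotonic m=+ offsets). A small consistency check with the values copied from that record (the exact definition belongs to kubelet, so treat this as a re-derivation, not the authoritative formula):

    # Offsets in seconds, copied from the pod_startup_latency_tracker record above.
    watch_observed_running = 41.699386904   # 07:38:41.699386904 (same minute as pod creation)
    pod_created            = 5.0            # 07:38:05
    pull_started_m         = 50.731850068   # firstStartedPulling, monotonic m=+ offset
    pull_finished_m        = 55.174829219   # lastFinishedPulling, monotonic m=+ offset

    e2e  = watch_observed_running - pod_created   # ~36.699386904s = podStartE2EDuration
    pull = pull_finished_m - pull_started_m       # ~4.442979151s spent pulling images
    slo  = e2e - pull                             # ~32.256407753s = podStartSLOduration
    print(round(e2e, 9), round(pull, 9), round(slo, 9))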
Dec 13 07:38:45.460074 env[1313]: 2024-12-13 07:38:45.454 [WARNING][4247] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d9c037e468016503e64e7fabc42ffc6e14dde911867e2872adbd2fec867e09e7" HandleID="k8s-pod-network.d9c037e468016503e64e7fabc42ffc6e14dde911867e2872adbd2fec867e09e7" Workload="srv--ktue8.gb1.brightbox.com-k8s-calico--apiserver--8888c77cc--mzdlq-eth0" Dec 13 07:38:45.460074 env[1313]: 2024-12-13 07:38:45.454 [INFO][4247] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d9c037e468016503e64e7fabc42ffc6e14dde911867e2872adbd2fec867e09e7" HandleID="k8s-pod-network.d9c037e468016503e64e7fabc42ffc6e14dde911867e2872adbd2fec867e09e7" Workload="srv--ktue8.gb1.brightbox.com-k8s-calico--apiserver--8888c77cc--mzdlq-eth0" Dec 13 07:38:45.460074 env[1313]: 2024-12-13 07:38:45.456 [INFO][4247] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 07:38:45.460074 env[1313]: 2024-12-13 07:38:45.458 [INFO][4241] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d9c037e468016503e64e7fabc42ffc6e14dde911867e2872adbd2fec867e09e7" Dec 13 07:38:45.461156 env[1313]: time="2024-12-13T07:38:45.461056370Z" level=info msg="TearDown network for sandbox \"d9c037e468016503e64e7fabc42ffc6e14dde911867e2872adbd2fec867e09e7\" successfully" Dec 13 07:38:45.464811 env[1313]: time="2024-12-13T07:38:45.464765320Z" level=info msg="RemovePodSandbox \"d9c037e468016503e64e7fabc42ffc6e14dde911867e2872adbd2fec867e09e7\" returns successfully" Dec 13 07:38:45.465711 env[1313]: time="2024-12-13T07:38:45.465673701Z" level=info msg="StopPodSandbox for \"df65ac1466338b88150286f835f59d4d095b840d03622e56957e4d0e53a872c5\"" Dec 13 07:38:45.559393 systemd-networkd[1082]: cali65101985d45: Link UP Dec 13 07:38:45.566909 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 07:38:45.567022 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali65101985d45: link becomes ready Dec 13 07:38:45.569913 systemd-networkd[1082]: cali65101985d45: Gained carrier Dec 13 07:38:45.612746 env[1313]: 2024-12-13 07:38:45.439 [INFO][4249] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--ktue8.gb1.brightbox.com-k8s-csi--node--driver--jgsz2-eth0 csi-node-driver- calico-system 6755c6bd-417c-469c-9e0b-b65078e35af8 869 0 2024-12-13 07:38:06 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-ktue8.gb1.brightbox.com csi-node-driver-jgsz2 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali65101985d45 [] []}} ContainerID="0df636dae344507ab25598249209d9d823ca43b9c21d0d2fb1703588fa501747" Namespace="calico-system" Pod="csi-node-driver-jgsz2" WorkloadEndpoint="srv--ktue8.gb1.brightbox.com-k8s-csi--node--driver--jgsz2-" Dec 13 07:38:45.612746 env[1313]: 2024-12-13 07:38:45.439 [INFO][4249] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0df636dae344507ab25598249209d9d823ca43b9c21d0d2fb1703588fa501747" Namespace="calico-system" Pod="csi-node-driver-jgsz2" WorkloadEndpoint="srv--ktue8.gb1.brightbox.com-k8s-csi--node--driver--jgsz2-eth0" Dec 13 07:38:45.612746 env[1313]: 2024-12-13 07:38:45.494 [INFO][4264] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="0df636dae344507ab25598249209d9d823ca43b9c21d0d2fb1703588fa501747" HandleID="k8s-pod-network.0df636dae344507ab25598249209d9d823ca43b9c21d0d2fb1703588fa501747" Workload="srv--ktue8.gb1.brightbox.com-k8s-csi--node--driver--jgsz2-eth0" Dec 13 07:38:45.612746 env[1313]: 2024-12-13 07:38:45.509 [INFO][4264] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0df636dae344507ab25598249209d9d823ca43b9c21d0d2fb1703588fa501747" HandleID="k8s-pod-network.0df636dae344507ab25598249209d9d823ca43b9c21d0d2fb1703588fa501747" Workload="srv--ktue8.gb1.brightbox.com-k8s-csi--node--driver--jgsz2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000310a50), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-ktue8.gb1.brightbox.com", "pod":"csi-node-driver-jgsz2", "timestamp":"2024-12-13 07:38:45.494389044 +0000 UTC"}, Hostname:"srv-ktue8.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 07:38:45.612746 env[1313]: 2024-12-13 07:38:45.509 [INFO][4264] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 07:38:45.612746 env[1313]: 2024-12-13 07:38:45.509 [INFO][4264] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 07:38:45.612746 env[1313]: 2024-12-13 07:38:45.509 [INFO][4264] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-ktue8.gb1.brightbox.com' Dec 13 07:38:45.612746 env[1313]: 2024-12-13 07:38:45.511 [INFO][4264] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0df636dae344507ab25598249209d9d823ca43b9c21d0d2fb1703588fa501747" host="srv-ktue8.gb1.brightbox.com" Dec 13 07:38:45.612746 env[1313]: 2024-12-13 07:38:45.517 [INFO][4264] ipam/ipam.go 372: Looking up existing affinities for host host="srv-ktue8.gb1.brightbox.com" Dec 13 07:38:45.612746 env[1313]: 2024-12-13 07:38:45.523 [INFO][4264] ipam/ipam.go 489: Trying affinity for 192.168.109.0/26 host="srv-ktue8.gb1.brightbox.com" Dec 13 07:38:45.612746 env[1313]: 2024-12-13 07:38:45.525 [INFO][4264] ipam/ipam.go 155: Attempting to load block cidr=192.168.109.0/26 host="srv-ktue8.gb1.brightbox.com" Dec 13 07:38:45.612746 env[1313]: 2024-12-13 07:38:45.528 [INFO][4264] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.109.0/26 host="srv-ktue8.gb1.brightbox.com" Dec 13 07:38:45.612746 env[1313]: 2024-12-13 07:38:45.528 [INFO][4264] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.109.0/26 handle="k8s-pod-network.0df636dae344507ab25598249209d9d823ca43b9c21d0d2fb1703588fa501747" host="srv-ktue8.gb1.brightbox.com" Dec 13 07:38:45.612746 env[1313]: 2024-12-13 07:38:45.530 [INFO][4264] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0df636dae344507ab25598249209d9d823ca43b9c21d0d2fb1703588fa501747 Dec 13 07:38:45.612746 env[1313]: 2024-12-13 07:38:45.539 [INFO][4264] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.109.0/26 handle="k8s-pod-network.0df636dae344507ab25598249209d9d823ca43b9c21d0d2fb1703588fa501747" host="srv-ktue8.gb1.brightbox.com" Dec 13 07:38:45.612746 env[1313]: 2024-12-13 07:38:45.549 [INFO][4264] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.109.4/26] block=192.168.109.0/26 handle="k8s-pod-network.0df636dae344507ab25598249209d9d823ca43b9c21d0d2fb1703588fa501747" host="srv-ktue8.gb1.brightbox.com" Dec 13 07:38:45.612746 env[1313]: 2024-12-13 
07:38:45.549 [INFO][4264] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.109.4/26] handle="k8s-pod-network.0df636dae344507ab25598249209d9d823ca43b9c21d0d2fb1703588fa501747" host="srv-ktue8.gb1.brightbox.com" Dec 13 07:38:45.612746 env[1313]: 2024-12-13 07:38:45.549 [INFO][4264] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 07:38:45.612746 env[1313]: 2024-12-13 07:38:45.549 [INFO][4264] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.109.4/26] IPv6=[] ContainerID="0df636dae344507ab25598249209d9d823ca43b9c21d0d2fb1703588fa501747" HandleID="k8s-pod-network.0df636dae344507ab25598249209d9d823ca43b9c21d0d2fb1703588fa501747" Workload="srv--ktue8.gb1.brightbox.com-k8s-csi--node--driver--jgsz2-eth0" Dec 13 07:38:45.614758 env[1313]: 2024-12-13 07:38:45.553 [INFO][4249] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0df636dae344507ab25598249209d9d823ca43b9c21d0d2fb1703588fa501747" Namespace="calico-system" Pod="csi-node-driver-jgsz2" WorkloadEndpoint="srv--ktue8.gb1.brightbox.com-k8s-csi--node--driver--jgsz2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ktue8.gb1.brightbox.com-k8s-csi--node--driver--jgsz2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6755c6bd-417c-469c-9e0b-b65078e35af8", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 7, 38, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ktue8.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-jgsz2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.109.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali65101985d45", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 07:38:45.614758 env[1313]: 2024-12-13 07:38:45.553 [INFO][4249] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.109.4/32] ContainerID="0df636dae344507ab25598249209d9d823ca43b9c21d0d2fb1703588fa501747" Namespace="calico-system" Pod="csi-node-driver-jgsz2" WorkloadEndpoint="srv--ktue8.gb1.brightbox.com-k8s-csi--node--driver--jgsz2-eth0" Dec 13 07:38:45.614758 env[1313]: 2024-12-13 07:38:45.554 [INFO][4249] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali65101985d45 ContainerID="0df636dae344507ab25598249209d9d823ca43b9c21d0d2fb1703588fa501747" Namespace="calico-system" Pod="csi-node-driver-jgsz2" WorkloadEndpoint="srv--ktue8.gb1.brightbox.com-k8s-csi--node--driver--jgsz2-eth0" Dec 13 07:38:45.614758 env[1313]: 2024-12-13 07:38:45.571 [INFO][4249] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0df636dae344507ab25598249209d9d823ca43b9c21d0d2fb1703588fa501747" Namespace="calico-system" Pod="csi-node-driver-jgsz2" 
WorkloadEndpoint="srv--ktue8.gb1.brightbox.com-k8s-csi--node--driver--jgsz2-eth0" Dec 13 07:38:45.614758 env[1313]: 2024-12-13 07:38:45.572 [INFO][4249] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0df636dae344507ab25598249209d9d823ca43b9c21d0d2fb1703588fa501747" Namespace="calico-system" Pod="csi-node-driver-jgsz2" WorkloadEndpoint="srv--ktue8.gb1.brightbox.com-k8s-csi--node--driver--jgsz2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ktue8.gb1.brightbox.com-k8s-csi--node--driver--jgsz2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6755c6bd-417c-469c-9e0b-b65078e35af8", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 7, 38, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ktue8.gb1.brightbox.com", ContainerID:"0df636dae344507ab25598249209d9d823ca43b9c21d0d2fb1703588fa501747", Pod:"csi-node-driver-jgsz2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.109.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali65101985d45", MAC:"12:02:51:5d:1d:76", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 07:38:45.614758 env[1313]: 2024-12-13 07:38:45.609 [INFO][4249] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0df636dae344507ab25598249209d9d823ca43b9c21d0d2fb1703588fa501747" Namespace="calico-system" Pod="csi-node-driver-jgsz2" WorkloadEndpoint="srv--ktue8.gb1.brightbox.com-k8s-csi--node--driver--jgsz2-eth0" Dec 13 07:38:45.651343 env[1313]: time="2024-12-13T07:38:45.651195241Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 07:38:45.651752 env[1313]: time="2024-12-13T07:38:45.651679617Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 07:38:45.652196 env[1313]: time="2024-12-13T07:38:45.652107535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 07:38:45.657106 env[1313]: time="2024-12-13T07:38:45.656697027Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0df636dae344507ab25598249209d9d823ca43b9c21d0d2fb1703588fa501747 pid=4317 runtime=io.containerd.runc.v2 Dec 13 07:38:45.657000 audit[4325]: NETFILTER_CFG table=filter:105 family=2 entries=46 op=nft_register_chain pid=4325 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 07:38:45.666949 kernel: audit: type=1325 audit(1734075525.657:401): table=filter:105 family=2 entries=46 op=nft_register_chain pid=4325 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 07:38:45.657000 audit[4325]: SYSCALL arch=c000003e syscall=46 success=yes exit=22712 a0=3 a1=7ffe374f6ad0 a2=0 a3=7ffe374f6abc items=0 ppid=3977 pid=4325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:45.675903 kernel: audit: type=1300 audit(1734075525.657:401): arch=c000003e syscall=46 success=yes exit=22712 a0=3 a1=7ffe374f6ad0 a2=0 a3=7ffe374f6abc items=0 ppid=3977 pid=4325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:45.657000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 07:38:45.686903 kernel: audit: type=1327 audit(1734075525.657:401): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 07:38:45.740715 env[1313]: 2024-12-13 07:38:45.623 [WARNING][4285] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="df65ac1466338b88150286f835f59d4d095b840d03622e56957e4d0e53a872c5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ktue8.gb1.brightbox.com-k8s-coredns--76f75df574--cfqbg-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"7ce172f1-b87c-421e-99dc-6fd967b7a919", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 7, 37, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ktue8.gb1.brightbox.com", ContainerID:"c0b32b58b0ea5996503824f5e7727282a2158080cbb1bc3253968b8da3d07533", Pod:"coredns-76f75df574-cfqbg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali594ae20e984", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 07:38:45.740715 env[1313]: 2024-12-13 07:38:45.623 [INFO][4285] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="df65ac1466338b88150286f835f59d4d095b840d03622e56957e4d0e53a872c5" Dec 13 07:38:45.740715 env[1313]: 2024-12-13 07:38:45.623 [INFO][4285] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="df65ac1466338b88150286f835f59d4d095b840d03622e56957e4d0e53a872c5" iface="eth0" netns="" Dec 13 07:38:45.740715 env[1313]: 2024-12-13 07:38:45.624 [INFO][4285] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="df65ac1466338b88150286f835f59d4d095b840d03622e56957e4d0e53a872c5" Dec 13 07:38:45.740715 env[1313]: 2024-12-13 07:38:45.624 [INFO][4285] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df65ac1466338b88150286f835f59d4d095b840d03622e56957e4d0e53a872c5" Dec 13 07:38:45.740715 env[1313]: 2024-12-13 07:38:45.697 [INFO][4303] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="df65ac1466338b88150286f835f59d4d095b840d03622e56957e4d0e53a872c5" HandleID="k8s-pod-network.df65ac1466338b88150286f835f59d4d095b840d03622e56957e4d0e53a872c5" Workload="srv--ktue8.gb1.brightbox.com-k8s-coredns--76f75df574--cfqbg-eth0" Dec 13 07:38:45.740715 env[1313]: 2024-12-13 07:38:45.701 [INFO][4303] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 07:38:45.740715 env[1313]: 2024-12-13 07:38:45.701 [INFO][4303] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 07:38:45.740715 env[1313]: 2024-12-13 07:38:45.709 [WARNING][4303] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="df65ac1466338b88150286f835f59d4d095b840d03622e56957e4d0e53a872c5" HandleID="k8s-pod-network.df65ac1466338b88150286f835f59d4d095b840d03622e56957e4d0e53a872c5" Workload="srv--ktue8.gb1.brightbox.com-k8s-coredns--76f75df574--cfqbg-eth0" Dec 13 07:38:45.740715 env[1313]: 2024-12-13 07:38:45.709 [INFO][4303] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="df65ac1466338b88150286f835f59d4d095b840d03622e56957e4d0e53a872c5" HandleID="k8s-pod-network.df65ac1466338b88150286f835f59d4d095b840d03622e56957e4d0e53a872c5" Workload="srv--ktue8.gb1.brightbox.com-k8s-coredns--76f75df574--cfqbg-eth0" Dec 13 07:38:45.740715 env[1313]: 2024-12-13 07:38:45.711 [INFO][4303] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 07:38:45.740715 env[1313]: 2024-12-13 07:38:45.722 [INFO][4285] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="df65ac1466338b88150286f835f59d4d095b840d03622e56957e4d0e53a872c5" Dec 13 07:38:45.741948 env[1313]: time="2024-12-13T07:38:45.741864558Z" level=info msg="TearDown network for sandbox \"df65ac1466338b88150286f835f59d4d095b840d03622e56957e4d0e53a872c5\" successfully" Dec 13 07:38:45.742158 env[1313]: time="2024-12-13T07:38:45.742124306Z" level=info msg="StopPodSandbox for \"df65ac1466338b88150286f835f59d4d095b840d03622e56957e4d0e53a872c5\" returns successfully" Dec 13 07:38:45.746575 env[1313]: time="2024-12-13T07:38:45.746534776Z" level=info msg="RemovePodSandbox for \"df65ac1466338b88150286f835f59d4d095b840d03622e56957e4d0e53a872c5\"" Dec 13 07:38:45.747166 env[1313]: time="2024-12-13T07:38:45.747097099Z" level=info msg="Forcibly stopping sandbox \"df65ac1466338b88150286f835f59d4d095b840d03622e56957e4d0e53a872c5\"" Dec 13 07:38:45.844647 kubelet[2265]: E1213 07:38:45.844576 2265 cadvisor_stats_provider.go:501] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods/besteffort/pod6755c6bd-417c-469c-9e0b-b65078e35af8/0df636dae344507ab25598249209d9d823ca43b9c21d0d2fb1703588fa501747\": RecentStats: unable to find data in memory cache]" Dec 13 07:38:45.863741 env[1313]: time="2024-12-13T07:38:45.863662188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jgsz2,Uid:6755c6bd-417c-469c-9e0b-b65078e35af8,Namespace:calico-system,Attempt:1,} returns sandbox id \"0df636dae344507ab25598249209d9d823ca43b9c21d0d2fb1703588fa501747\"" Dec 13 07:38:45.868242 env[1313]: time="2024-12-13T07:38:45.868205959Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 07:38:45.951208 env[1313]: 2024-12-13 07:38:45.900 [WARNING][4364] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="df65ac1466338b88150286f835f59d4d095b840d03622e56957e4d0e53a872c5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ktue8.gb1.brightbox.com-k8s-coredns--76f75df574--cfqbg-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"7ce172f1-b87c-421e-99dc-6fd967b7a919", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 7, 37, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ktue8.gb1.brightbox.com", ContainerID:"c0b32b58b0ea5996503824f5e7727282a2158080cbb1bc3253968b8da3d07533", Pod:"coredns-76f75df574-cfqbg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali594ae20e984", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 07:38:45.951208 env[1313]: 2024-12-13 07:38:45.901 [INFO][4364] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="df65ac1466338b88150286f835f59d4d095b840d03622e56957e4d0e53a872c5" Dec 13 07:38:45.951208 env[1313]: 2024-12-13 07:38:45.901 [INFO][4364] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="df65ac1466338b88150286f835f59d4d095b840d03622e56957e4d0e53a872c5" iface="eth0" netns="" Dec 13 07:38:45.951208 env[1313]: 2024-12-13 07:38:45.901 [INFO][4364] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="df65ac1466338b88150286f835f59d4d095b840d03622e56957e4d0e53a872c5" Dec 13 07:38:45.951208 env[1313]: 2024-12-13 07:38:45.901 [INFO][4364] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df65ac1466338b88150286f835f59d4d095b840d03622e56957e4d0e53a872c5" Dec 13 07:38:45.951208 env[1313]: 2024-12-13 07:38:45.933 [INFO][4377] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="df65ac1466338b88150286f835f59d4d095b840d03622e56957e4d0e53a872c5" HandleID="k8s-pod-network.df65ac1466338b88150286f835f59d4d095b840d03622e56957e4d0e53a872c5" Workload="srv--ktue8.gb1.brightbox.com-k8s-coredns--76f75df574--cfqbg-eth0" Dec 13 07:38:45.951208 env[1313]: 2024-12-13 07:38:45.933 [INFO][4377] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 07:38:45.951208 env[1313]: 2024-12-13 07:38:45.933 [INFO][4377] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 07:38:45.951208 env[1313]: 2024-12-13 07:38:45.945 [WARNING][4377] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="df65ac1466338b88150286f835f59d4d095b840d03622e56957e4d0e53a872c5" HandleID="k8s-pod-network.df65ac1466338b88150286f835f59d4d095b840d03622e56957e4d0e53a872c5" Workload="srv--ktue8.gb1.brightbox.com-k8s-coredns--76f75df574--cfqbg-eth0" Dec 13 07:38:45.951208 env[1313]: 2024-12-13 07:38:45.945 [INFO][4377] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="df65ac1466338b88150286f835f59d4d095b840d03622e56957e4d0e53a872c5" HandleID="k8s-pod-network.df65ac1466338b88150286f835f59d4d095b840d03622e56957e4d0e53a872c5" Workload="srv--ktue8.gb1.brightbox.com-k8s-coredns--76f75df574--cfqbg-eth0" Dec 13 07:38:45.951208 env[1313]: 2024-12-13 07:38:45.947 [INFO][4377] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 07:38:45.951208 env[1313]: 2024-12-13 07:38:45.949 [INFO][4364] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="df65ac1466338b88150286f835f59d4d095b840d03622e56957e4d0e53a872c5" Dec 13 07:38:45.952267 env[1313]: time="2024-12-13T07:38:45.952213369Z" level=info msg="TearDown network for sandbox \"df65ac1466338b88150286f835f59d4d095b840d03622e56957e4d0e53a872c5\" successfully" Dec 13 07:38:45.955852 env[1313]: time="2024-12-13T07:38:45.955796303Z" level=info msg="RemovePodSandbox \"df65ac1466338b88150286f835f59d4d095b840d03622e56957e4d0e53a872c5\" returns successfully" Dec 13 07:38:45.956717 env[1313]: time="2024-12-13T07:38:45.956683459Z" level=info msg="StopPodSandbox for \"5b88ef4d0c58e614c29a5e2b14606089b72a35d246f89f7c252604acd3eca4bb\"" Dec 13 07:38:46.080286 env[1313]: 2024-12-13 07:38:46.027 [WARNING][4397] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5b88ef4d0c58e614c29a5e2b14606089b72a35d246f89f7c252604acd3eca4bb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ktue8.gb1.brightbox.com-k8s-coredns--76f75df574--msfc4-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"58a0fd67-8895-4d73-a1c6-6bb7d5fa543a", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 7, 37, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ktue8.gb1.brightbox.com", ContainerID:"1bb6a5835ca29f4649c9fdf0be51d8bc802811478bcfd9119f105c2646b7abc3", Pod:"coredns-76f75df574-msfc4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia6bb70ac1af", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, 
Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 07:38:46.080286 env[1313]: 2024-12-13 07:38:46.028 [INFO][4397] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5b88ef4d0c58e614c29a5e2b14606089b72a35d246f89f7c252604acd3eca4bb" Dec 13 07:38:46.080286 env[1313]: 2024-12-13 07:38:46.028 [INFO][4397] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5b88ef4d0c58e614c29a5e2b14606089b72a35d246f89f7c252604acd3eca4bb" iface="eth0" netns="" Dec 13 07:38:46.080286 env[1313]: 2024-12-13 07:38:46.028 [INFO][4397] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5b88ef4d0c58e614c29a5e2b14606089b72a35d246f89f7c252604acd3eca4bb" Dec 13 07:38:46.080286 env[1313]: 2024-12-13 07:38:46.028 [INFO][4397] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5b88ef4d0c58e614c29a5e2b14606089b72a35d246f89f7c252604acd3eca4bb" Dec 13 07:38:46.080286 env[1313]: 2024-12-13 07:38:46.064 [INFO][4404] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5b88ef4d0c58e614c29a5e2b14606089b72a35d246f89f7c252604acd3eca4bb" HandleID="k8s-pod-network.5b88ef4d0c58e614c29a5e2b14606089b72a35d246f89f7c252604acd3eca4bb" Workload="srv--ktue8.gb1.brightbox.com-k8s-coredns--76f75df574--msfc4-eth0" Dec 13 07:38:46.080286 env[1313]: 2024-12-13 07:38:46.065 [INFO][4404] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 07:38:46.080286 env[1313]: 2024-12-13 07:38:46.065 [INFO][4404] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 07:38:46.080286 env[1313]: 2024-12-13 07:38:46.073 [WARNING][4404] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5b88ef4d0c58e614c29a5e2b14606089b72a35d246f89f7c252604acd3eca4bb" HandleID="k8s-pod-network.5b88ef4d0c58e614c29a5e2b14606089b72a35d246f89f7c252604acd3eca4bb" Workload="srv--ktue8.gb1.brightbox.com-k8s-coredns--76f75df574--msfc4-eth0" Dec 13 07:38:46.080286 env[1313]: 2024-12-13 07:38:46.074 [INFO][4404] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5b88ef4d0c58e614c29a5e2b14606089b72a35d246f89f7c252604acd3eca4bb" HandleID="k8s-pod-network.5b88ef4d0c58e614c29a5e2b14606089b72a35d246f89f7c252604acd3eca4bb" Workload="srv--ktue8.gb1.brightbox.com-k8s-coredns--76f75df574--msfc4-eth0" Dec 13 07:38:46.080286 env[1313]: 2024-12-13 07:38:46.076 [INFO][4404] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 07:38:46.080286 env[1313]: 2024-12-13 07:38:46.078 [INFO][4397] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="5b88ef4d0c58e614c29a5e2b14606089b72a35d246f89f7c252604acd3eca4bb" Dec 13 07:38:46.082188 env[1313]: time="2024-12-13T07:38:46.082128128Z" level=info msg="TearDown network for sandbox \"5b88ef4d0c58e614c29a5e2b14606089b72a35d246f89f7c252604acd3eca4bb\" successfully" Dec 13 07:38:46.082321 env[1313]: time="2024-12-13T07:38:46.082289930Z" level=info msg="StopPodSandbox for \"5b88ef4d0c58e614c29a5e2b14606089b72a35d246f89f7c252604acd3eca4bb\" returns successfully" Dec 13 07:38:46.083332 env[1313]: time="2024-12-13T07:38:46.083244152Z" level=info msg="RemovePodSandbox for \"5b88ef4d0c58e614c29a5e2b14606089b72a35d246f89f7c252604acd3eca4bb\"" Dec 13 07:38:46.083506 env[1313]: time="2024-12-13T07:38:46.083395636Z" level=info msg="Forcibly stopping sandbox \"5b88ef4d0c58e614c29a5e2b14606089b72a35d246f89f7c252604acd3eca4bb\"" Dec 13 07:38:46.190585 env[1313]: 2024-12-13 07:38:46.140 [WARNING][4423] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5b88ef4d0c58e614c29a5e2b14606089b72a35d246f89f7c252604acd3eca4bb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ktue8.gb1.brightbox.com-k8s-coredns--76f75df574--msfc4-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"58a0fd67-8895-4d73-a1c6-6bb7d5fa543a", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 7, 37, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ktue8.gb1.brightbox.com", ContainerID:"1bb6a5835ca29f4649c9fdf0be51d8bc802811478bcfd9119f105c2646b7abc3", Pod:"coredns-76f75df574-msfc4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia6bb70ac1af", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 07:38:46.190585 env[1313]: 2024-12-13 07:38:46.140 [INFO][4423] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5b88ef4d0c58e614c29a5e2b14606089b72a35d246f89f7c252604acd3eca4bb" Dec 13 07:38:46.190585 env[1313]: 2024-12-13 07:38:46.140 [INFO][4423] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5b88ef4d0c58e614c29a5e2b14606089b72a35d246f89f7c252604acd3eca4bb" iface="eth0" netns="" Dec 13 07:38:46.190585 env[1313]: 2024-12-13 07:38:46.140 [INFO][4423] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5b88ef4d0c58e614c29a5e2b14606089b72a35d246f89f7c252604acd3eca4bb" Dec 13 07:38:46.190585 env[1313]: 2024-12-13 07:38:46.140 [INFO][4423] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5b88ef4d0c58e614c29a5e2b14606089b72a35d246f89f7c252604acd3eca4bb" Dec 13 07:38:46.190585 env[1313]: 2024-12-13 07:38:46.172 [INFO][4430] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5b88ef4d0c58e614c29a5e2b14606089b72a35d246f89f7c252604acd3eca4bb" HandleID="k8s-pod-network.5b88ef4d0c58e614c29a5e2b14606089b72a35d246f89f7c252604acd3eca4bb" Workload="srv--ktue8.gb1.brightbox.com-k8s-coredns--76f75df574--msfc4-eth0" Dec 13 07:38:46.190585 env[1313]: 2024-12-13 07:38:46.172 [INFO][4430] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 07:38:46.190585 env[1313]: 2024-12-13 07:38:46.172 [INFO][4430] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 07:38:46.190585 env[1313]: 2024-12-13 07:38:46.182 [WARNING][4430] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5b88ef4d0c58e614c29a5e2b14606089b72a35d246f89f7c252604acd3eca4bb" HandleID="k8s-pod-network.5b88ef4d0c58e614c29a5e2b14606089b72a35d246f89f7c252604acd3eca4bb" Workload="srv--ktue8.gb1.brightbox.com-k8s-coredns--76f75df574--msfc4-eth0" Dec 13 07:38:46.190585 env[1313]: 2024-12-13 07:38:46.182 [INFO][4430] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5b88ef4d0c58e614c29a5e2b14606089b72a35d246f89f7c252604acd3eca4bb" HandleID="k8s-pod-network.5b88ef4d0c58e614c29a5e2b14606089b72a35d246f89f7c252604acd3eca4bb" Workload="srv--ktue8.gb1.brightbox.com-k8s-coredns--76f75df574--msfc4-eth0" Dec 13 07:38:46.190585 env[1313]: 2024-12-13 07:38:46.184 [INFO][4430] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 07:38:46.190585 env[1313]: 2024-12-13 07:38:46.186 [INFO][4423] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5b88ef4d0c58e614c29a5e2b14606089b72a35d246f89f7c252604acd3eca4bb" Dec 13 07:38:46.190585 env[1313]: time="2024-12-13T07:38:46.189053287Z" level=info msg="TearDown network for sandbox \"5b88ef4d0c58e614c29a5e2b14606089b72a35d246f89f7c252604acd3eca4bb\" successfully" Dec 13 07:38:46.193066 env[1313]: time="2024-12-13T07:38:46.192960964Z" level=info msg="RemovePodSandbox \"5b88ef4d0c58e614c29a5e2b14606089b72a35d246f89f7c252604acd3eca4bb\" returns successfully" Dec 13 07:38:46.262889 systemd[1]: run-containerd-runc-k8s.io-0df636dae344507ab25598249209d9d823ca43b9c21d0d2fb1703588fa501747-runc.ONyEMM.mount: Deactivated successfully. 
Dec 13 07:38:46.926204 systemd-networkd[1082]: cali65101985d45: Gained IPv6LL Dec 13 07:38:47.045808 env[1313]: time="2024-12-13T07:38:47.045747244Z" level=info msg="StopPodSandbox for \"6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6\"" Dec 13 07:38:47.047238 env[1313]: time="2024-12-13T07:38:47.046589888Z" level=info msg="StopPodSandbox for \"0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28\"" Dec 13 07:38:47.335312 env[1313]: 2024-12-13 07:38:47.181 [INFO][4466] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6" Dec 13 07:38:47.335312 env[1313]: 2024-12-13 07:38:47.181 [INFO][4466] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6" iface="eth0" netns="/var/run/netns/cni-4e00bc92-4cd6-bcf3-db20-01d58547a5c1" Dec 13 07:38:47.335312 env[1313]: 2024-12-13 07:38:47.182 [INFO][4466] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6" iface="eth0" netns="/var/run/netns/cni-4e00bc92-4cd6-bcf3-db20-01d58547a5c1" Dec 13 07:38:47.335312 env[1313]: 2024-12-13 07:38:47.182 [INFO][4466] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6" iface="eth0" netns="/var/run/netns/cni-4e00bc92-4cd6-bcf3-db20-01d58547a5c1" Dec 13 07:38:47.335312 env[1313]: 2024-12-13 07:38:47.182 [INFO][4466] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6" Dec 13 07:38:47.335312 env[1313]: 2024-12-13 07:38:47.183 [INFO][4466] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6" Dec 13 07:38:47.335312 env[1313]: 2024-12-13 07:38:47.283 [INFO][4476] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6" HandleID="k8s-pod-network.6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6" Workload="srv--ktue8.gb1.brightbox.com-k8s-calico--apiserver--8888c77cc--8shn8-eth0" Dec 13 07:38:47.335312 env[1313]: 2024-12-13 07:38:47.284 [INFO][4476] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 07:38:47.335312 env[1313]: 2024-12-13 07:38:47.284 [INFO][4476] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 07:38:47.335312 env[1313]: 2024-12-13 07:38:47.315 [WARNING][4476] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6" HandleID="k8s-pod-network.6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6" Workload="srv--ktue8.gb1.brightbox.com-k8s-calico--apiserver--8888c77cc--8shn8-eth0" Dec 13 07:38:47.335312 env[1313]: 2024-12-13 07:38:47.315 [INFO][4476] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6" HandleID="k8s-pod-network.6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6" Workload="srv--ktue8.gb1.brightbox.com-k8s-calico--apiserver--8888c77cc--8shn8-eth0" Dec 13 07:38:47.335312 env[1313]: 2024-12-13 07:38:47.317 [INFO][4476] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
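Every IPAM step above and below is bracketed by "About to acquire host-wide IPAM lock." / "Acquired host-wide IPAM lock." / "Released host-wide IPAM lock.", which serialises address operations across the several sandboxes being torn down and created in this window (csi-node-driver, coredns, calico-apiserver, calico-kube-controllers). Purely as an illustration of that pattern, assuming a file-lock helper and path that are not taken from Calico's actual mechanism:

    // Rough illustration of a host-wide lock via an exclusive flock on a
    // well-known file. Path and helpers are assumptions for this sketch only.
    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"syscall"
    )

    const lockPath = "/tmp/ipam.lock" // hypothetical location for this sketch

    // withHostWideLock runs fn while holding an exclusive lock on lockPath, so
    // only one ADD/DEL on this host touches IPAM state at a time.
    func withHostWideLock(fn func() error) error {
    	f, err := os.OpenFile(lockPath, os.O_CREATE|os.O_RDWR, 0o600)
    	if err != nil {
    		return err
    	}
    	defer f.Close()

    	fmt.Println("About to acquire host-wide IPAM lock.")
    	if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
    		return err
    	}
    	fmt.Println("Acquired host-wide IPAM lock.")
    	defer func() {
    		_ = syscall.Flock(int(f.Fd()), syscall.LOCK_UN)
    		fmt.Println("Released host-wide IPAM lock.")
    	}()

    	return fn()
    }

    func main() {
    	err := withHostWideLock(func() error {
    		fmt.Println("assigning / releasing addresses for one sandbox")
    		return nil
    	})
    	if err != nil {
    		log.Fatal(err)
    	}
    }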
Dec 13 07:38:47.335312 env[1313]: 2024-12-13 07:38:47.323 [INFO][4466] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6" Dec 13 07:38:47.342496 systemd[1]: run-netns-cni\x2d4e00bc92\x2d4cd6\x2dbcf3\x2ddb20\x2d01d58547a5c1.mount: Deactivated successfully. Dec 13 07:38:47.345404 env[1313]: time="2024-12-13T07:38:47.345333035Z" level=info msg="TearDown network for sandbox \"6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6\" successfully" Dec 13 07:38:47.345605 env[1313]: time="2024-12-13T07:38:47.345568050Z" level=info msg="StopPodSandbox for \"6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6\" returns successfully" Dec 13 07:38:47.346918 env[1313]: time="2024-12-13T07:38:47.346861566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8888c77cc-8shn8,Uid:acbf9fc3-226f-42b6-886d-99d3e542c4e9,Namespace:calico-apiserver,Attempt:1,}" Dec 13 07:38:47.426072 env[1313]: 2024-12-13 07:38:47.258 [INFO][4467] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28" Dec 13 07:38:47.426072 env[1313]: 2024-12-13 07:38:47.258 [INFO][4467] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28" iface="eth0" netns="/var/run/netns/cni-60ebcc05-6533-5428-096f-0573fbc133d7" Dec 13 07:38:47.426072 env[1313]: 2024-12-13 07:38:47.258 [INFO][4467] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28" iface="eth0" netns="/var/run/netns/cni-60ebcc05-6533-5428-096f-0573fbc133d7" Dec 13 07:38:47.426072 env[1313]: 2024-12-13 07:38:47.259 [INFO][4467] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28" iface="eth0" netns="/var/run/netns/cni-60ebcc05-6533-5428-096f-0573fbc133d7" Dec 13 07:38:47.426072 env[1313]: 2024-12-13 07:38:47.259 [INFO][4467] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28" Dec 13 07:38:47.426072 env[1313]: 2024-12-13 07:38:47.259 [INFO][4467] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28" Dec 13 07:38:47.426072 env[1313]: 2024-12-13 07:38:47.382 [INFO][4483] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28" HandleID="k8s-pod-network.0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28" Workload="srv--ktue8.gb1.brightbox.com-k8s-calico--kube--controllers--7ccc67bff4--xdz9n-eth0" Dec 13 07:38:47.426072 env[1313]: 2024-12-13 07:38:47.382 [INFO][4483] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 07:38:47.426072 env[1313]: 2024-12-13 07:38:47.382 [INFO][4483] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 07:38:47.426072 env[1313]: 2024-12-13 07:38:47.397 [WARNING][4483] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28" HandleID="k8s-pod-network.0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28" Workload="srv--ktue8.gb1.brightbox.com-k8s-calico--kube--controllers--7ccc67bff4--xdz9n-eth0" Dec 13 07:38:47.426072 env[1313]: 2024-12-13 07:38:47.397 [INFO][4483] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28" HandleID="k8s-pod-network.0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28" Workload="srv--ktue8.gb1.brightbox.com-k8s-calico--kube--controllers--7ccc67bff4--xdz9n-eth0" Dec 13 07:38:47.426072 env[1313]: 2024-12-13 07:38:47.410 [INFO][4483] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 07:38:47.426072 env[1313]: 2024-12-13 07:38:47.415 [INFO][4467] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28" Dec 13 07:38:47.431499 systemd[1]: run-netns-cni\x2d60ebcc05\x2d6533\x2d5428\x2d096f\x2d0573fbc133d7.mount: Deactivated successfully. Dec 13 07:38:47.437090 env[1313]: time="2024-12-13T07:38:47.436952280Z" level=info msg="TearDown network for sandbox \"0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28\" successfully" Dec 13 07:38:47.437350 env[1313]: time="2024-12-13T07:38:47.437229350Z" level=info msg="StopPodSandbox for \"0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28\" returns successfully" Dec 13 07:38:47.448227 env[1313]: time="2024-12-13T07:38:47.445354124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7ccc67bff4-xdz9n,Uid:1bec6ab5-1944-4370-b6de-fe55870be674,Namespace:calico-system,Attempt:1,}" Dec 13 07:38:47.886349 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 07:38:47.886653 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali9b47b3ae4cd: link becomes ready Dec 13 07:38:47.881136 systemd-networkd[1082]: cali9b47b3ae4cd: Link UP Dec 13 07:38:47.884751 systemd-networkd[1082]: cali9b47b3ae4cd: Gained carrier Dec 13 07:38:47.972113 env[1313]: 2024-12-13 07:38:47.618 [INFO][4504] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--ktue8.gb1.brightbox.com-k8s-calico--kube--controllers--7ccc67bff4--xdz9n-eth0 calico-kube-controllers-7ccc67bff4- calico-system 1bec6ab5-1944-4370-b6de-fe55870be674 885 0 2024-12-13 07:38:06 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7ccc67bff4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-ktue8.gb1.brightbox.com calico-kube-controllers-7ccc67bff4-xdz9n eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali9b47b3ae4cd [] []}} ContainerID="c2acb483ec5c8f705787454e04fdd290cb12a78742f4306de4015389bf5e5681" Namespace="calico-system" Pod="calico-kube-controllers-7ccc67bff4-xdz9n" WorkloadEndpoint="srv--ktue8.gb1.brightbox.com-k8s-calico--kube--controllers--7ccc67bff4--xdz9n-" Dec 13 07:38:47.972113 env[1313]: 2024-12-13 07:38:47.618 [INFO][4504] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c2acb483ec5c8f705787454e04fdd290cb12a78742f4306de4015389bf5e5681" Namespace="calico-system" Pod="calico-kube-controllers-7ccc67bff4-xdz9n" 
WorkloadEndpoint="srv--ktue8.gb1.brightbox.com-k8s-calico--kube--controllers--7ccc67bff4--xdz9n-eth0" Dec 13 07:38:47.972113 env[1313]: 2024-12-13 07:38:47.760 [INFO][4517] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c2acb483ec5c8f705787454e04fdd290cb12a78742f4306de4015389bf5e5681" HandleID="k8s-pod-network.c2acb483ec5c8f705787454e04fdd290cb12a78742f4306de4015389bf5e5681" Workload="srv--ktue8.gb1.brightbox.com-k8s-calico--kube--controllers--7ccc67bff4--xdz9n-eth0" Dec 13 07:38:47.972113 env[1313]: 2024-12-13 07:38:47.784 [INFO][4517] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c2acb483ec5c8f705787454e04fdd290cb12a78742f4306de4015389bf5e5681" HandleID="k8s-pod-network.c2acb483ec5c8f705787454e04fdd290cb12a78742f4306de4015389bf5e5681" Workload="srv--ktue8.gb1.brightbox.com-k8s-calico--kube--controllers--7ccc67bff4--xdz9n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000311890), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-ktue8.gb1.brightbox.com", "pod":"calico-kube-controllers-7ccc67bff4-xdz9n", "timestamp":"2024-12-13 07:38:47.760704506 +0000 UTC"}, Hostname:"srv-ktue8.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 07:38:47.972113 env[1313]: 2024-12-13 07:38:47.784 [INFO][4517] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 07:38:47.972113 env[1313]: 2024-12-13 07:38:47.784 [INFO][4517] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 07:38:47.972113 env[1313]: 2024-12-13 07:38:47.784 [INFO][4517] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-ktue8.gb1.brightbox.com' Dec 13 07:38:47.972113 env[1313]: 2024-12-13 07:38:47.796 [INFO][4517] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c2acb483ec5c8f705787454e04fdd290cb12a78742f4306de4015389bf5e5681" host="srv-ktue8.gb1.brightbox.com" Dec 13 07:38:47.972113 env[1313]: 2024-12-13 07:38:47.805 [INFO][4517] ipam/ipam.go 372: Looking up existing affinities for host host="srv-ktue8.gb1.brightbox.com" Dec 13 07:38:47.972113 env[1313]: 2024-12-13 07:38:47.815 [INFO][4517] ipam/ipam.go 489: Trying affinity for 192.168.109.0/26 host="srv-ktue8.gb1.brightbox.com" Dec 13 07:38:47.972113 env[1313]: 2024-12-13 07:38:47.826 [INFO][4517] ipam/ipam.go 155: Attempting to load block cidr=192.168.109.0/26 host="srv-ktue8.gb1.brightbox.com" Dec 13 07:38:47.972113 env[1313]: 2024-12-13 07:38:47.835 [INFO][4517] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.109.0/26 host="srv-ktue8.gb1.brightbox.com" Dec 13 07:38:47.972113 env[1313]: 2024-12-13 07:38:47.835 [INFO][4517] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.109.0/26 handle="k8s-pod-network.c2acb483ec5c8f705787454e04fdd290cb12a78742f4306de4015389bf5e5681" host="srv-ktue8.gb1.brightbox.com" Dec 13 07:38:47.972113 env[1313]: 2024-12-13 07:38:47.845 [INFO][4517] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c2acb483ec5c8f705787454e04fdd290cb12a78742f4306de4015389bf5e5681 Dec 13 07:38:47.972113 env[1313]: 2024-12-13 07:38:47.853 [INFO][4517] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.109.0/26 handle="k8s-pod-network.c2acb483ec5c8f705787454e04fdd290cb12a78742f4306de4015389bf5e5681" host="srv-ktue8.gb1.brightbox.com" Dec 13 07:38:47.972113 env[1313]: 
2024-12-13 07:38:47.862 [INFO][4517] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.109.5/26] block=192.168.109.0/26 handle="k8s-pod-network.c2acb483ec5c8f705787454e04fdd290cb12a78742f4306de4015389bf5e5681" host="srv-ktue8.gb1.brightbox.com" Dec 13 07:38:47.972113 env[1313]: 2024-12-13 07:38:47.862 [INFO][4517] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.109.5/26] handle="k8s-pod-network.c2acb483ec5c8f705787454e04fdd290cb12a78742f4306de4015389bf5e5681" host="srv-ktue8.gb1.brightbox.com" Dec 13 07:38:47.972113 env[1313]: 2024-12-13 07:38:47.862 [INFO][4517] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 07:38:47.972113 env[1313]: 2024-12-13 07:38:47.862 [INFO][4517] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.109.5/26] IPv6=[] ContainerID="c2acb483ec5c8f705787454e04fdd290cb12a78742f4306de4015389bf5e5681" HandleID="k8s-pod-network.c2acb483ec5c8f705787454e04fdd290cb12a78742f4306de4015389bf5e5681" Workload="srv--ktue8.gb1.brightbox.com-k8s-calico--kube--controllers--7ccc67bff4--xdz9n-eth0" Dec 13 07:38:47.974193 env[1313]: 2024-12-13 07:38:47.865 [INFO][4504] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c2acb483ec5c8f705787454e04fdd290cb12a78742f4306de4015389bf5e5681" Namespace="calico-system" Pod="calico-kube-controllers-7ccc67bff4-xdz9n" WorkloadEndpoint="srv--ktue8.gb1.brightbox.com-k8s-calico--kube--controllers--7ccc67bff4--xdz9n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ktue8.gb1.brightbox.com-k8s-calico--kube--controllers--7ccc67bff4--xdz9n-eth0", GenerateName:"calico-kube-controllers-7ccc67bff4-", Namespace:"calico-system", SelfLink:"", UID:"1bec6ab5-1944-4370-b6de-fe55870be674", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 7, 38, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7ccc67bff4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ktue8.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-7ccc67bff4-xdz9n", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.109.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9b47b3ae4cd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 07:38:47.974193 env[1313]: 2024-12-13 07:38:47.865 [INFO][4504] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.109.5/32] ContainerID="c2acb483ec5c8f705787454e04fdd290cb12a78742f4306de4015389bf5e5681" Namespace="calico-system" Pod="calico-kube-controllers-7ccc67bff4-xdz9n" WorkloadEndpoint="srv--ktue8.gb1.brightbox.com-k8s-calico--kube--controllers--7ccc67bff4--xdz9n-eth0" Dec 13 07:38:47.974193 env[1313]: 2024-12-13 07:38:47.865 [INFO][4504] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9b47b3ae4cd 
ContainerID="c2acb483ec5c8f705787454e04fdd290cb12a78742f4306de4015389bf5e5681" Namespace="calico-system" Pod="calico-kube-controllers-7ccc67bff4-xdz9n" WorkloadEndpoint="srv--ktue8.gb1.brightbox.com-k8s-calico--kube--controllers--7ccc67bff4--xdz9n-eth0" Dec 13 07:38:47.974193 env[1313]: 2024-12-13 07:38:47.886 [INFO][4504] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c2acb483ec5c8f705787454e04fdd290cb12a78742f4306de4015389bf5e5681" Namespace="calico-system" Pod="calico-kube-controllers-7ccc67bff4-xdz9n" WorkloadEndpoint="srv--ktue8.gb1.brightbox.com-k8s-calico--kube--controllers--7ccc67bff4--xdz9n-eth0" Dec 13 07:38:47.974193 env[1313]: 2024-12-13 07:38:47.920 [INFO][4504] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c2acb483ec5c8f705787454e04fdd290cb12a78742f4306de4015389bf5e5681" Namespace="calico-system" Pod="calico-kube-controllers-7ccc67bff4-xdz9n" WorkloadEndpoint="srv--ktue8.gb1.brightbox.com-k8s-calico--kube--controllers--7ccc67bff4--xdz9n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ktue8.gb1.brightbox.com-k8s-calico--kube--controllers--7ccc67bff4--xdz9n-eth0", GenerateName:"calico-kube-controllers-7ccc67bff4-", Namespace:"calico-system", SelfLink:"", UID:"1bec6ab5-1944-4370-b6de-fe55870be674", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 7, 38, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7ccc67bff4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ktue8.gb1.brightbox.com", ContainerID:"c2acb483ec5c8f705787454e04fdd290cb12a78742f4306de4015389bf5e5681", Pod:"calico-kube-controllers-7ccc67bff4-xdz9n", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.109.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9b47b3ae4cd", MAC:"aa:e8:7c:b1:82:46", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 07:38:47.974193 env[1313]: 2024-12-13 07:38:47.965 [INFO][4504] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c2acb483ec5c8f705787454e04fdd290cb12a78742f4306de4015389bf5e5681" Namespace="calico-system" Pod="calico-kube-controllers-7ccc67bff4-xdz9n" WorkloadEndpoint="srv--ktue8.gb1.brightbox.com-k8s-calico--kube--controllers--7ccc67bff4--xdz9n-eth0" Dec 13 07:38:48.018699 systemd-networkd[1082]: cali779acc4d7b9: Link UP Dec 13 07:38:48.020224 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali779acc4d7b9: link becomes ready Dec 13 07:38:48.020048 systemd-networkd[1082]: cali779acc4d7b9: Gained carrier Dec 13 07:38:48.051943 kernel: audit: type=1325 audit(1734075528.045:402): table=filter:106 family=2 entries=46 op=nft_register_chain pid=4542 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 07:38:48.045000 audit[4542]: NETFILTER_CFG table=filter:106 
family=2 entries=46 op=nft_register_chain pid=4542 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 07:38:48.045000 audit[4542]: SYSCALL arch=c000003e syscall=46 success=yes exit=22204 a0=3 a1=7ffdd7f0edd0 a2=0 a3=7ffdd7f0edbc items=0 ppid=3977 pid=4542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:48.063098 kernel: audit: type=1300 audit(1734075528.045:402): arch=c000003e syscall=46 success=yes exit=22204 a0=3 a1=7ffdd7f0edd0 a2=0 a3=7ffdd7f0edbc items=0 ppid=3977 pid=4542 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:48.045000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 07:38:48.070498 env[1313]: 2024-12-13 07:38:47.603 [INFO][4491] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--ktue8.gb1.brightbox.com-k8s-calico--apiserver--8888c77cc--8shn8-eth0 calico-apiserver-8888c77cc- calico-apiserver acbf9fc3-226f-42b6-886d-99d3e542c4e9 883 0 2024-12-13 07:38:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:8888c77cc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-ktue8.gb1.brightbox.com calico-apiserver-8888c77cc-8shn8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali779acc4d7b9 [] []}} ContainerID="44c1638d51e18865347f45a34898277e5096b397b9a07328b5121a83ed758b84" Namespace="calico-apiserver" Pod="calico-apiserver-8888c77cc-8shn8" WorkloadEndpoint="srv--ktue8.gb1.brightbox.com-k8s-calico--apiserver--8888c77cc--8shn8-" Dec 13 07:38:48.070498 env[1313]: 2024-12-13 07:38:47.603 [INFO][4491] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="44c1638d51e18865347f45a34898277e5096b397b9a07328b5121a83ed758b84" Namespace="calico-apiserver" Pod="calico-apiserver-8888c77cc-8shn8" WorkloadEndpoint="srv--ktue8.gb1.brightbox.com-k8s-calico--apiserver--8888c77cc--8shn8-eth0" Dec 13 07:38:48.070498 env[1313]: 2024-12-13 07:38:47.818 [INFO][4516] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="44c1638d51e18865347f45a34898277e5096b397b9a07328b5121a83ed758b84" HandleID="k8s-pod-network.44c1638d51e18865347f45a34898277e5096b397b9a07328b5121a83ed758b84" Workload="srv--ktue8.gb1.brightbox.com-k8s-calico--apiserver--8888c77cc--8shn8-eth0" Dec 13 07:38:48.070498 env[1313]: 2024-12-13 07:38:47.855 [INFO][4516] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="44c1638d51e18865347f45a34898277e5096b397b9a07328b5121a83ed758b84" HandleID="k8s-pod-network.44c1638d51e18865347f45a34898277e5096b397b9a07328b5121a83ed758b84" Workload="srv--ktue8.gb1.brightbox.com-k8s-calico--apiserver--8888c77cc--8shn8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000310af0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-ktue8.gb1.brightbox.com", "pod":"calico-apiserver-8888c77cc-8shn8", "timestamp":"2024-12-13 07:38:47.818543782 +0000 UTC"}, Hostname:"srv-ktue8.gb1.brightbox.com", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 07:38:48.070498 env[1313]: 2024-12-13 07:38:47.855 [INFO][4516] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 07:38:48.070498 env[1313]: 2024-12-13 07:38:47.862 [INFO][4516] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 07:38:48.070498 env[1313]: 2024-12-13 07:38:47.862 [INFO][4516] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-ktue8.gb1.brightbox.com' Dec 13 07:38:48.070498 env[1313]: 2024-12-13 07:38:47.883 [INFO][4516] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.44c1638d51e18865347f45a34898277e5096b397b9a07328b5121a83ed758b84" host="srv-ktue8.gb1.brightbox.com" Dec 13 07:38:48.070498 env[1313]: 2024-12-13 07:38:47.925 [INFO][4516] ipam/ipam.go 372: Looking up existing affinities for host host="srv-ktue8.gb1.brightbox.com" Dec 13 07:38:48.070498 env[1313]: 2024-12-13 07:38:47.948 [INFO][4516] ipam/ipam.go 489: Trying affinity for 192.168.109.0/26 host="srv-ktue8.gb1.brightbox.com" Dec 13 07:38:48.070498 env[1313]: 2024-12-13 07:38:47.962 [INFO][4516] ipam/ipam.go 155: Attempting to load block cidr=192.168.109.0/26 host="srv-ktue8.gb1.brightbox.com" Dec 13 07:38:48.070498 env[1313]: 2024-12-13 07:38:47.968 [INFO][4516] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.109.0/26 host="srv-ktue8.gb1.brightbox.com" Dec 13 07:38:48.070498 env[1313]: 2024-12-13 07:38:47.968 [INFO][4516] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.109.0/26 handle="k8s-pod-network.44c1638d51e18865347f45a34898277e5096b397b9a07328b5121a83ed758b84" host="srv-ktue8.gb1.brightbox.com" Dec 13 07:38:48.070498 env[1313]: 2024-12-13 07:38:47.977 [INFO][4516] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.44c1638d51e18865347f45a34898277e5096b397b9a07328b5121a83ed758b84 Dec 13 07:38:48.070498 env[1313]: 2024-12-13 07:38:47.985 [INFO][4516] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.109.0/26 handle="k8s-pod-network.44c1638d51e18865347f45a34898277e5096b397b9a07328b5121a83ed758b84" host="srv-ktue8.gb1.brightbox.com" Dec 13 07:38:48.070498 env[1313]: 2024-12-13 07:38:47.997 [INFO][4516] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.109.6/26] block=192.168.109.0/26 handle="k8s-pod-network.44c1638d51e18865347f45a34898277e5096b397b9a07328b5121a83ed758b84" host="srv-ktue8.gb1.brightbox.com" Dec 13 07:38:48.070498 env[1313]: 2024-12-13 07:38:47.997 [INFO][4516] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.109.6/26] handle="k8s-pod-network.44c1638d51e18865347f45a34898277e5096b397b9a07328b5121a83ed758b84" host="srv-ktue8.gb1.brightbox.com" Dec 13 07:38:48.070498 env[1313]: 2024-12-13 07:38:47.997 [INFO][4516] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
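
The IPAM records above show Calico claiming single /32 addresses (192.168.109.5 and 192.168.109.6) out of the host-affine block 192.168.109.0/26. The standard-library sketch below is only an illustration of that block arithmetic, not Calico's implementation: a /26 holds 2^(32-26) = 64 addresses, and both claimed IPs fall inside the block.

// Minimal sketch (not Calico's code): the arithmetic behind the IPAM records
// above, which assign single /32 addresses out of the host-affine block
// 192.168.109.0/26.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.109.0/26")

	// Total addresses in the block: 2^(32-26) = 64.
	size := 1 << (32 - block.Bits())
	fmt.Printf("block %s holds %d addresses\n", block, size)

	// The two pod IPs claimed in the log both fall inside the block.
	for _, s := range []string{"192.168.109.5", "192.168.109.6"} {
		ip := netip.MustParseAddr(s)
		fmt.Printf("%s in %s: %v\n", ip, block, block.Contains(ip))
	}
}
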
Dec 13 07:38:48.070498 env[1313]: 2024-12-13 07:38:47.997 [INFO][4516] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.109.6/26] IPv6=[] ContainerID="44c1638d51e18865347f45a34898277e5096b397b9a07328b5121a83ed758b84" HandleID="k8s-pod-network.44c1638d51e18865347f45a34898277e5096b397b9a07328b5121a83ed758b84" Workload="srv--ktue8.gb1.brightbox.com-k8s-calico--apiserver--8888c77cc--8shn8-eth0" Dec 13 07:38:48.075436 kernel: audit: type=1327 audit(1734075528.045:402): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 07:38:48.075516 env[1313]: 2024-12-13 07:38:48.009 [INFO][4491] cni-plugin/k8s.go 386: Populated endpoint ContainerID="44c1638d51e18865347f45a34898277e5096b397b9a07328b5121a83ed758b84" Namespace="calico-apiserver" Pod="calico-apiserver-8888c77cc-8shn8" WorkloadEndpoint="srv--ktue8.gb1.brightbox.com-k8s-calico--apiserver--8888c77cc--8shn8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ktue8.gb1.brightbox.com-k8s-calico--apiserver--8888c77cc--8shn8-eth0", GenerateName:"calico-apiserver-8888c77cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"acbf9fc3-226f-42b6-886d-99d3e542c4e9", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 7, 38, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8888c77cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ktue8.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-8888c77cc-8shn8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali779acc4d7b9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 07:38:48.075516 env[1313]: 2024-12-13 07:38:48.009 [INFO][4491] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.109.6/32] ContainerID="44c1638d51e18865347f45a34898277e5096b397b9a07328b5121a83ed758b84" Namespace="calico-apiserver" Pod="calico-apiserver-8888c77cc-8shn8" WorkloadEndpoint="srv--ktue8.gb1.brightbox.com-k8s-calico--apiserver--8888c77cc--8shn8-eth0" Dec 13 07:38:48.075516 env[1313]: 2024-12-13 07:38:48.009 [INFO][4491] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali779acc4d7b9 ContainerID="44c1638d51e18865347f45a34898277e5096b397b9a07328b5121a83ed758b84" Namespace="calico-apiserver" Pod="calico-apiserver-8888c77cc-8shn8" WorkloadEndpoint="srv--ktue8.gb1.brightbox.com-k8s-calico--apiserver--8888c77cc--8shn8-eth0" Dec 13 07:38:48.075516 env[1313]: 2024-12-13 07:38:48.022 [INFO][4491] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="44c1638d51e18865347f45a34898277e5096b397b9a07328b5121a83ed758b84" Namespace="calico-apiserver" Pod="calico-apiserver-8888c77cc-8shn8" 
WorkloadEndpoint="srv--ktue8.gb1.brightbox.com-k8s-calico--apiserver--8888c77cc--8shn8-eth0" Dec 13 07:38:48.075516 env[1313]: 2024-12-13 07:38:48.022 [INFO][4491] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="44c1638d51e18865347f45a34898277e5096b397b9a07328b5121a83ed758b84" Namespace="calico-apiserver" Pod="calico-apiserver-8888c77cc-8shn8" WorkloadEndpoint="srv--ktue8.gb1.brightbox.com-k8s-calico--apiserver--8888c77cc--8shn8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ktue8.gb1.brightbox.com-k8s-calico--apiserver--8888c77cc--8shn8-eth0", GenerateName:"calico-apiserver-8888c77cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"acbf9fc3-226f-42b6-886d-99d3e542c4e9", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 7, 38, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8888c77cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ktue8.gb1.brightbox.com", ContainerID:"44c1638d51e18865347f45a34898277e5096b397b9a07328b5121a83ed758b84", Pod:"calico-apiserver-8888c77cc-8shn8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali779acc4d7b9", MAC:"b6:05:7a:be:b7:1e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 07:38:48.075516 env[1313]: 2024-12-13 07:38:48.045 [INFO][4491] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="44c1638d51e18865347f45a34898277e5096b397b9a07328b5121a83ed758b84" Namespace="calico-apiserver" Pod="calico-apiserver-8888c77cc-8shn8" WorkloadEndpoint="srv--ktue8.gb1.brightbox.com-k8s-calico--apiserver--8888c77cc--8shn8-eth0" Dec 13 07:38:48.121982 env[1313]: time="2024-12-13T07:38:48.119737019Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 07:38:48.121982 env[1313]: time="2024-12-13T07:38:48.120094552Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 07:38:48.121982 env[1313]: time="2024-12-13T07:38:48.120169455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 07:38:48.123233 env[1313]: time="2024-12-13T07:38:48.123046659Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c2acb483ec5c8f705787454e04fdd290cb12a78742f4306de4015389bf5e5681 pid=4556 runtime=io.containerd.runc.v2 Dec 13 07:38:48.141436 kernel: audit: type=1325 audit(1734075528.124:403): table=filter:107 family=2 entries=50 op=nft_register_chain pid=4562 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 07:38:48.141589 kernel: audit: type=1300 audit(1734075528.124:403): arch=c000003e syscall=46 success=yes exit=25080 a0=3 a1=7ffcdd76c7f0 a2=0 a3=7ffcdd76c7dc items=0 ppid=3977 pid=4562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:48.124000 audit[4562]: NETFILTER_CFG table=filter:107 family=2 entries=50 op=nft_register_chain pid=4562 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 07:38:48.124000 audit[4562]: SYSCALL arch=c000003e syscall=46 success=yes exit=25080 a0=3 a1=7ffcdd76c7f0 a2=0 a3=7ffcdd76c7dc items=0 ppid=3977 pid=4562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:48.151917 kernel: audit: type=1327 audit(1734075528.124:403): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 07:38:48.124000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 07:38:48.186073 env[1313]: time="2024-12-13T07:38:48.178665313Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 07:38:48.186073 env[1313]: time="2024-12-13T07:38:48.179054283Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 07:38:48.186073 env[1313]: time="2024-12-13T07:38:48.179207459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 07:38:48.186073 env[1313]: time="2024-12-13T07:38:48.181824840Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/44c1638d51e18865347f45a34898277e5096b397b9a07328b5121a83ed758b84 pid=4583 runtime=io.containerd.runc.v2 Dec 13 07:38:48.468658 env[1313]: time="2024-12-13T07:38:48.468577273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7ccc67bff4-xdz9n,Uid:1bec6ab5-1944-4370-b6de-fe55870be674,Namespace:calico-system,Attempt:1,} returns sandbox id \"c2acb483ec5c8f705787454e04fdd290cb12a78742f4306de4015389bf5e5681\"" Dec 13 07:38:48.550387 env[1313]: time="2024-12-13T07:38:48.550332344Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:38:48.553315 env[1313]: time="2024-12-13T07:38:48.553272561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8888c77cc-8shn8,Uid:acbf9fc3-226f-42b6-886d-99d3e542c4e9,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"44c1638d51e18865347f45a34898277e5096b397b9a07328b5121a83ed758b84\"" Dec 13 07:38:48.563914 env[1313]: time="2024-12-13T07:38:48.563783161Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:38:48.567283 env[1313]: time="2024-12-13T07:38:48.567235075Z" level=info msg="CreateContainer within sandbox \"44c1638d51e18865347f45a34898277e5096b397b9a07328b5121a83ed758b84\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 07:38:48.577917 env[1313]: time="2024-12-13T07:38:48.575670776Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:38:48.579049 env[1313]: time="2024-12-13T07:38:48.579003677Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:38:48.586912 env[1313]: time="2024-12-13T07:38:48.586573379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Dec 13 07:38:48.593312 env[1313]: time="2024-12-13T07:38:48.589816400Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Dec 13 07:38:48.592124 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3469687217.mount: Deactivated successfully. 
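
The type=1327 PROCTITLE records above (audit serials 402 and 403) carry the audited process's command line as a hex-encoded, NUL-separated byte string. The short helper below is illustrative only and not part of auditd; it decodes such a value, and the long string in those two records decodes to iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000.

// Small helper (illustrative, not part of auditd): decode the hex-encoded
// PROCTITLE field from the audit records above. The kernel logs the process
// command line as NUL-separated arguments, hex-encoded.
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func decodeProctitle(h string) ([]string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return nil, err
	}
	// Arguments are separated by NUL bytes; trim a possible trailing NUL.
	return strings.Split(strings.TrimRight(string(raw), "\x00"), "\x00"), nil
}

func main() {
	// proctitle value copied from the type=1327 records above.
	argv, err := decodeProctitle("69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030")
	if err != nil {
		panic(err)
	}
	fmt.Println(strings.Join(argv, " "))
	// Output: iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000
}
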
Dec 13 07:38:48.598771 env[1313]: time="2024-12-13T07:38:48.598529579Z" level=info msg="CreateContainer within sandbox \"0df636dae344507ab25598249209d9d823ca43b9c21d0d2fb1703588fa501747\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 07:38:48.612920 env[1313]: time="2024-12-13T07:38:48.611915554Z" level=info msg="CreateContainer within sandbox \"44c1638d51e18865347f45a34898277e5096b397b9a07328b5121a83ed758b84\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"bc720df58c2b6c8ebaab98a73f5ed27f27d242776743ce57b246255dbc6c091b\"" Dec 13 07:38:48.614408 env[1313]: time="2024-12-13T07:38:48.614352186Z" level=info msg="StartContainer for \"bc720df58c2b6c8ebaab98a73f5ed27f27d242776743ce57b246255dbc6c091b\"" Dec 13 07:38:48.640635 env[1313]: time="2024-12-13T07:38:48.640568682Z" level=info msg="CreateContainer within sandbox \"0df636dae344507ab25598249209d9d823ca43b9c21d0d2fb1703588fa501747\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"2682818fa9f29b0ece4896374f100a45eab887e0796e15f3c5f0ac7bacc52221\"" Dec 13 07:38:48.642271 env[1313]: time="2024-12-13T07:38:48.642235689Z" level=info msg="StartContainer for \"2682818fa9f29b0ece4896374f100a45eab887e0796e15f3c5f0ac7bacc52221\"" Dec 13 07:38:48.844792 env[1313]: time="2024-12-13T07:38:48.844726059Z" level=info msg="StartContainer for \"2682818fa9f29b0ece4896374f100a45eab887e0796e15f3c5f0ac7bacc52221\" returns successfully" Dec 13 07:38:48.881505 env[1313]: time="2024-12-13T07:38:48.881430016Z" level=info msg="StartContainer for \"bc720df58c2b6c8ebaab98a73f5ed27f27d242776743ce57b246255dbc6c091b\" returns successfully" Dec 13 07:38:49.069395 systemd-networkd[1082]: cali9b47b3ae4cd: Gained IPv6LL Dec 13 07:38:49.346535 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount862192435.mount: Deactivated successfully. 
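
The RunPodSandbox, CreateContainer, and StartContainer messages above are containerd's side of CRI calls issued by the kubelet. The sketch below drives the same sequence through the CRI v1 Go API for the calico-kube-controllers pod named in the log; the socket path is the conventional containerd default and the image and metadata are copied from the records above, so treat this as a rough illustration rather than the kubelet's actual code path.

// Rough sketch (not the kubelet's code) of the CRI calls whose results appear
// above: RunPodSandbox, then CreateContainer and StartContainer inside the
// returned sandbox. Assumes the CRI v1 API on containerd's default socket;
// error handling is kept minimal so the sequence stays visible.
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// Sandbox metadata copied from the RunPodSandbox record above.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "calico-kube-controllers-7ccc67bff4-xdz9n",
			Namespace: "calico-system",
			Uid:       "1bec6ab5-1944-4370-b6de-fe55870be674",
			Attempt:   1,
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("sandbox id:", sb.PodSandboxId)

	// Container and image names taken from the CreateContainer/PullImage records.
	cc, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId:  sb.PodSandboxId,
		SandboxConfig: sandboxCfg,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "calico-kube-controllers", Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: "ghcr.io/flatcar/calico/kube-controllers:v3.29.1"},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: cc.ContainerId}); err != nil {
		log.Fatal(err)
	}
	fmt.Println("started container:", cc.ContainerId)
}
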
Dec 13 07:38:49.545000 audit[4715]: NETFILTER_CFG table=filter:108 family=2 entries=10 op=nft_register_rule pid=4715 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 07:38:49.552921 kernel: audit: type=1325 audit(1734075529.545:404): table=filter:108 family=2 entries=10 op=nft_register_rule pid=4715 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 07:38:49.545000 audit[4715]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffce4d7b060 a2=0 a3=7ffce4d7b04c items=0 ppid=2405 pid=4715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:49.545000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 07:38:49.556000 audit[4715]: NETFILTER_CFG table=nat:109 family=2 entries=20 op=nft_register_rule pid=4715 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 07:38:49.556000 audit[4715]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffce4d7b060 a2=0 a3=7ffce4d7b04c items=0 ppid=2405 pid=4715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:49.556000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 07:38:49.677187 systemd-networkd[1082]: cali779acc4d7b9: Gained IPv6LL Dec 13 07:38:50.496721 kubelet[2265]: I1213 07:38:50.495729 2265 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 07:38:52.369861 env[1313]: time="2024-12-13T07:38:52.369734614Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:38:52.374033 env[1313]: time="2024-12-13T07:38:52.373963444Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:38:52.375807 env[1313]: time="2024-12-13T07:38:52.375774206Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:38:52.378414 env[1313]: time="2024-12-13T07:38:52.378368704Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:38:52.378796 env[1313]: time="2024-12-13T07:38:52.378758801Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Dec 13 07:38:52.382332 env[1313]: time="2024-12-13T07:38:52.380910604Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 07:38:52.446130 env[1313]: time="2024-12-13T07:38:52.445984027Z" level=info msg="CreateContainer within sandbox \"c2acb483ec5c8f705787454e04fdd290cb12a78742f4306de4015389bf5e5681\" for container 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Dec 13 07:38:52.468202 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount513599129.mount: Deactivated successfully. Dec 13 07:38:52.471037 env[1313]: time="2024-12-13T07:38:52.470978533Z" level=info msg="CreateContainer within sandbox \"c2acb483ec5c8f705787454e04fdd290cb12a78742f4306de4015389bf5e5681\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"cca6b830759fb782ef48ee50ead9e2ca1809a7438bf018251a99625cfa9b7924\"" Dec 13 07:38:52.474367 env[1313]: time="2024-12-13T07:38:52.474308920Z" level=info msg="StartContainer for \"cca6b830759fb782ef48ee50ead9e2ca1809a7438bf018251a99625cfa9b7924\"" Dec 13 07:38:52.649484 env[1313]: time="2024-12-13T07:38:52.649063894Z" level=info msg="StartContainer for \"cca6b830759fb782ef48ee50ead9e2ca1809a7438bf018251a99625cfa9b7924\" returns successfully" Dec 13 07:38:53.543532 kubelet[2265]: I1213 07:38:53.543045 2265 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-8888c77cc-8shn8" podStartSLOduration=48.542945697 podStartE2EDuration="48.542945697s" podCreationTimestamp="2024-12-13 07:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 07:38:49.493643003 +0000 UTC m=+64.770937369" watchObservedRunningTime="2024-12-13 07:38:53.542945697 +0000 UTC m=+68.820240038" Dec 13 07:38:53.563265 systemd[1]: run-containerd-runc-k8s.io-cca6b830759fb782ef48ee50ead9e2ca1809a7438bf018251a99625cfa9b7924-runc.gb7tFx.mount: Deactivated successfully. Dec 13 07:38:53.753995 kubelet[2265]: I1213 07:38:53.753920 2265 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7ccc67bff4-xdz9n" podStartSLOduration=43.850155017 podStartE2EDuration="47.753840322s" podCreationTimestamp="2024-12-13 07:38:06 +0000 UTC" firstStartedPulling="2024-12-13 07:38:48.476510407 +0000 UTC m=+63.753804741" lastFinishedPulling="2024-12-13 07:38:52.380195712 +0000 UTC m=+67.657490046" observedRunningTime="2024-12-13 07:38:53.543471443 +0000 UTC m=+68.820765799" watchObservedRunningTime="2024-12-13 07:38:53.753840322 +0000 UTC m=+69.031134663" Dec 13 07:38:54.384458 env[1313]: time="2024-12-13T07:38:54.384346437Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:38:54.390358 env[1313]: time="2024-12-13T07:38:54.390313601Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:38:54.398446 env[1313]: time="2024-12-13T07:38:54.398386197Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:38:54.406417 env[1313]: time="2024-12-13T07:38:54.406347669Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 07:38:54.410221 env[1313]: time="2024-12-13T07:38:54.407508938Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Dec 13 07:38:54.412349 env[1313]: time="2024-12-13T07:38:54.412272082Z" level=info msg="CreateContainer within sandbox \"0df636dae344507ab25598249209d9d823ca43b9c21d0d2fb1703588fa501747\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 07:38:54.434579 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2951303494.mount: Deactivated successfully. Dec 13 07:38:54.438511 env[1313]: time="2024-12-13T07:38:54.438451469Z" level=info msg="CreateContainer within sandbox \"0df636dae344507ab25598249209d9d823ca43b9c21d0d2fb1703588fa501747\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"68b34cfa65ab80c78cc06edbba9f535b816bac99eabbb28f24ecc496ff03d4fb\"" Dec 13 07:38:54.441481 env[1313]: time="2024-12-13T07:38:54.441444439Z" level=info msg="StartContainer for \"68b34cfa65ab80c78cc06edbba9f535b816bac99eabbb28f24ecc496ff03d4fb\"" Dec 13 07:38:54.468835 kubelet[2265]: I1213 07:38:54.468780 2265 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 07:38:54.600000 audit[4800]: NETFILTER_CFG table=filter:110 family=2 entries=9 op=nft_register_rule pid=4800 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 07:38:54.607465 kernel: kauditd_printk_skb: 5 callbacks suppressed Dec 13 07:38:54.611331 kernel: audit: type=1325 audit(1734075534.600:406): table=filter:110 family=2 entries=9 op=nft_register_rule pid=4800 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 07:38:54.611760 kernel: audit: type=1300 audit(1734075534.600:406): arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffde679b110 a2=0 a3=7ffde679b0fc items=0 ppid=2405 pid=4800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:54.600000 audit[4800]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffde679b110 a2=0 a3=7ffde679b0fc items=0 ppid=2405 pid=4800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:54.600000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 07:38:54.622175 kernel: audit: type=1327 audit(1734075534.600:406): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 07:38:54.622000 audit[4800]: NETFILTER_CFG table=nat:111 family=2 entries=27 op=nft_register_chain pid=4800 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 07:38:54.631823 kernel: audit: type=1325 audit(1734075534.622:407): table=nat:111 family=2 entries=27 op=nft_register_chain pid=4800 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 07:38:54.622000 audit[4800]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffde679b110 a2=0 a3=7ffde679b0fc items=0 ppid=2405 pid=4800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:54.642039 kernel: audit: type=1300 
audit(1734075534.622:407): arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffde679b110 a2=0 a3=7ffde679b0fc items=0 ppid=2405 pid=4800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:38:54.622000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 07:38:54.653011 kernel: audit: type=1327 audit(1734075534.622:407): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 07:38:54.745934 env[1313]: time="2024-12-13T07:38:54.745795488Z" level=info msg="StartContainer for \"68b34cfa65ab80c78cc06edbba9f535b816bac99eabbb28f24ecc496ff03d4fb\" returns successfully" Dec 13 07:38:55.352494 kubelet[2265]: I1213 07:38:55.352399 2265 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 07:38:55.353254 kubelet[2265]: I1213 07:38:55.353230 2265 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 07:38:55.428442 systemd[1]: run-containerd-runc-k8s.io-68b34cfa65ab80c78cc06edbba9f535b816bac99eabbb28f24ecc496ff03d4fb-runc.uh7i4F.mount: Deactivated successfully. Dec 13 07:38:55.590382 kubelet[2265]: I1213 07:38:55.590331 2265 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-jgsz2" podStartSLOduration=41.048614876 podStartE2EDuration="49.590220257s" podCreationTimestamp="2024-12-13 07:38:06 +0000 UTC" firstStartedPulling="2024-12-13 07:38:45.867251581 +0000 UTC m=+61.144545915" lastFinishedPulling="2024-12-13 07:38:54.408856958 +0000 UTC m=+69.686151296" observedRunningTime="2024-12-13 07:38:55.588419319 +0000 UTC m=+70.865713677" watchObservedRunningTime="2024-12-13 07:38:55.590220257 +0000 UTC m=+70.867514598" Dec 13 07:39:01.469124 kubelet[2265]: I1213 07:39:01.469046 2265 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 07:39:01.530000 audit[4824]: NETFILTER_CFG table=filter:112 family=2 entries=8 op=nft_register_rule pid=4824 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 07:39:01.530000 audit[4824]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffc33c6aa20 a2=0 a3=7ffc33c6aa0c items=0 ppid=2405 pid=4824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:39:01.546066 kernel: audit: type=1325 audit(1734075541.530:408): table=filter:112 family=2 entries=8 op=nft_register_rule pid=4824 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 07:39:01.546312 kernel: audit: type=1300 audit(1734075541.530:408): arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffc33c6aa20 a2=0 a3=7ffc33c6aa0c items=0 ppid=2405 pid=4824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:39:01.546418 kernel: audit: type=1327 audit(1734075541.530:408): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 07:39:01.530000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 07:39:01.552000 audit[4824]: NETFILTER_CFG table=nat:113 family=2 entries=34 op=nft_register_chain pid=4824 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 07:39:01.552000 audit[4824]: SYSCALL arch=c000003e syscall=46 success=yes exit=11236 a0=3 a1=7ffc33c6aa20 a2=0 a3=7ffc33c6aa0c items=0 ppid=2405 pid=4824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:39:01.564699 kernel: audit: type=1325 audit(1734075541.552:409): table=nat:113 family=2 entries=34 op=nft_register_chain pid=4824 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 07:39:01.565087 kernel: audit: type=1300 audit(1734075541.552:409): arch=c000003e syscall=46 success=yes exit=11236 a0=3 a1=7ffc33c6aa20 a2=0 a3=7ffc33c6aa0c items=0 ppid=2405 pid=4824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:39:01.565165 kernel: audit: type=1327 audit(1734075541.552:409): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 07:39:01.552000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 07:39:04.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.230.78.246:22-139.178.89.65:38724 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:39:04.209523 systemd[1]: Started sshd@9-10.230.78.246:22-139.178.89.65:38724.service. Dec 13 07:39:04.222355 kernel: audit: type=1130 audit(1734075544.209:410): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.230.78.246:22-139.178.89.65:38724 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 07:39:05.187000 audit[4828]: USER_ACCT pid=4828 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:05.191958 sshd[4828]: Accepted publickey for core from 139.178.89.65 port 38724 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY Dec 13 07:39:05.196910 kernel: audit: type=1101 audit(1734075545.187:411): pid=4828 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:05.197000 audit[4828]: CRED_ACQ pid=4828 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:05.202800 sshd[4828]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 07:39:05.207917 kernel: audit: type=1103 audit(1734075545.197:412): pid=4828 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:05.224282 kernel: audit: type=1006 audit(1734075545.198:413): pid=4828 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Dec 13 07:39:05.198000 audit[4828]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff4f7c0320 a2=3 a3=0 items=0 ppid=1 pid=4828 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:39:05.198000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 07:39:05.239991 systemd-logind[1301]: New session 10 of user core. Dec 13 07:39:05.241585 systemd[1]: Started session-10.scope. 
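
In the kubelet pod_startup_latency_tracker records a little above, podStartSLOduration is the end-to-end startup duration with the image-pull window (lastFinishedPulling minus firstStartedPulling) subtracted. The quick standard-library check below reproduces the calico-kube-controllers figure: 47.753840322s minus a 3.903685305s pull window gives the reported 43.850155017s.

// Quick check (illustrative) of the pod_startup_latency_tracker record above:
// podStartSLOduration = podStartE2EDuration - (lastFinishedPulling - firstStartedPulling).
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}

	// Values copied from the calico-kube-controllers-7ccc67bff4-xdz9n record.
	e2e := 47753840322 * time.Nanosecond // podStartE2EDuration=47.753840322s
	firstPull := parse("2024-12-13 07:38:48.476510407 +0000 UTC")
	lastPull := parse("2024-12-13 07:38:52.380195712 +0000 UTC")

	slo := e2e - lastPull.Sub(firstPull)
	fmt.Println(slo) // 43.850155017s, matching podStartSLOduration
}
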
Dec 13 07:39:05.263000 audit[4828]: USER_START pid=4828 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:05.266000 audit[4831]: CRED_ACQ pid=4831 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:06.523778 sshd[4828]: pam_unix(sshd:session): session closed for user core Dec 13 07:39:06.526000 audit[4828]: USER_END pid=4828 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:06.526000 audit[4828]: CRED_DISP pid=4828 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:06.529432 systemd[1]: sshd@9-10.230.78.246:22-139.178.89.65:38724.service: Deactivated successfully. Dec 13 07:39:06.529000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.230.78.246:22-139.178.89.65:38724 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:39:06.531425 systemd-logind[1301]: Session 10 logged out. Waiting for processes to exit. Dec 13 07:39:06.532776 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 07:39:06.534875 systemd-logind[1301]: Removed session 10. Dec 13 07:39:11.546986 systemd[1]: run-containerd-runc-k8s.io-f686aa7caff47cf7895fa8aec1a3132bcab42b03efa7a9dbef0ec320ee2de6cd-runc.GCSQHm.mount: Deactivated successfully. Dec 13 07:39:11.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.230.78.246:22-139.178.89.65:42474 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:39:11.674645 systemd[1]: Started sshd@10-10.230.78.246:22-139.178.89.65:42474.service. Dec 13 07:39:11.676573 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 13 07:39:11.676690 kernel: audit: type=1130 audit(1734075551.674:419): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.230.78.246:22-139.178.89.65:42474 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 07:39:12.594150 sshd[4864]: Accepted publickey for core from 139.178.89.65 port 42474 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY Dec 13 07:39:12.593000 audit[4864]: USER_ACCT pid=4864 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:12.605963 kernel: audit: type=1101 audit(1734075552.593:420): pid=4864 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:12.604000 audit[4864]: CRED_ACQ pid=4864 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:12.613909 kernel: audit: type=1103 audit(1734075552.604:421): pid=4864 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:12.614352 sshd[4864]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 07:39:12.618959 kernel: audit: type=1006 audit(1734075552.604:422): pid=4864 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Dec 13 07:39:12.604000 audit[4864]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffd87c5a20 a2=3 a3=0 items=0 ppid=1 pid=4864 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:39:12.631925 kernel: audit: type=1300 audit(1734075552.604:422): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffd87c5a20 a2=3 a3=0 items=0 ppid=1 pid=4864 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:39:12.604000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 07:39:12.643381 kernel: audit: type=1327 audit(1734075552.604:422): proctitle=737368643A20636F7265205B707269765D Dec 13 07:39:12.643681 systemd[1]: Started session-11.scope. Dec 13 07:39:12.645570 systemd-logind[1301]: New session 11 of user core. 
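
Every audit record above is stamped audit(EPOCH.MILLIS:SERIAL); converting the epoch part gives the same instant as the neighbouring journal timestamps. The small helper below is an illustration of that conversion: 1734075552.604 from the session-11 sshd records is 07:39:12.604 UTC on 13 Dec 2024, matching the journal lines around it.

// Illustrative helper: convert the audit(EPOCH.MILLIS:SERIAL) stamp carried by
// the audit records above into wall-clock time for correlation with the journal.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func auditTime(stamp string) (time.Time, uint64, error) {
	// stamp looks like "1734075552.604:422" -> seconds.millis : serial
	timePart, serialPart, ok := strings.Cut(stamp, ":")
	if !ok {
		return time.Time{}, 0, fmt.Errorf("malformed stamp %q", stamp)
	}
	secPart, msPart, _ := strings.Cut(timePart, ".")
	sec, err := strconv.ParseInt(secPart, 10, 64)
	if err != nil {
		return time.Time{}, 0, err
	}
	ms, err := strconv.ParseInt(msPart, 10, 64)
	if err != nil {
		return time.Time{}, 0, err
	}
	serial, err := strconv.ParseUint(serialPart, 10, 64)
	if err != nil {
		return time.Time{}, 0, err
	}
	return time.Unix(sec, ms*int64(time.Millisecond)).UTC(), serial, nil
}

func main() {
	t, serial, err := auditTime("1734075552.604:422")
	if err != nil {
		panic(err)
	}
	fmt.Printf("audit serial %d at %s\n", serial, t.Format(time.RFC3339Nano))
	// audit serial 422 at 2024-12-13T07:39:12.604Z
}
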
Dec 13 07:39:12.666096 kernel: audit: type=1105 audit(1734075552.655:423): pid=4864 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:12.655000 audit[4864]: USER_START pid=4864 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:12.686042 kernel: audit: type=1103 audit(1734075552.678:424): pid=4869 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:12.678000 audit[4869]: CRED_ACQ pid=4869 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:13.391128 sshd[4864]: pam_unix(sshd:session): session closed for user core Dec 13 07:39:13.392000 audit[4864]: USER_END pid=4864 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:13.406278 kernel: audit: type=1106 audit(1734075553.392:425): pid=4864 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:13.406414 kernel: audit: type=1104 audit(1734075553.392:426): pid=4864 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:13.392000 audit[4864]: CRED_DISP pid=4864 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:13.404181 systemd[1]: sshd@10-10.230.78.246:22-139.178.89.65:42474.service: Deactivated successfully. Dec 13 07:39:13.405727 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 07:39:13.423676 systemd-logind[1301]: Session 11 logged out. Waiting for processes to exit. Dec 13 07:39:13.403000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.230.78.246:22-139.178.89.65:42474 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:39:13.425671 systemd-logind[1301]: Removed session 11. Dec 13 07:39:18.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.230.78.246:22-139.178.89.65:38752 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 07:39:18.537751 systemd[1]: Started sshd@11-10.230.78.246:22-139.178.89.65:38752.service. Dec 13 07:39:18.543155 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 07:39:18.543279 kernel: audit: type=1130 audit(1734075558.537:428): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.230.78.246:22-139.178.89.65:38752 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:39:19.462072 sshd[4879]: Accepted publickey for core from 139.178.89.65 port 38752 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY Dec 13 07:39:19.460000 audit[4879]: USER_ACCT pid=4879 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:19.474943 kernel: audit: type=1101 audit(1734075559.460:429): pid=4879 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:19.478556 sshd[4879]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 07:39:19.474000 audit[4879]: CRED_ACQ pid=4879 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:19.486054 kernel: audit: type=1103 audit(1734075559.474:430): pid=4879 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:19.474000 audit[4879]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc4a91e1d0 a2=3 a3=0 items=0 ppid=1 pid=4879 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:39:19.497984 kernel: audit: type=1006 audit(1734075559.474:431): pid=4879 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Dec 13 07:39:19.498471 kernel: audit: type=1300 audit(1734075559.474:431): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc4a91e1d0 a2=3 a3=0 items=0 ppid=1 pid=4879 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:39:19.499679 kernel: audit: type=1327 audit(1734075559.474:431): proctitle=737368643A20636F7265205B707269765D Dec 13 07:39:19.474000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 07:39:19.508369 systemd-logind[1301]: New session 12 of user core. Dec 13 07:39:19.510948 systemd[1]: Started session-12.scope. 
Dec 13 07:39:19.521000 audit[4879]: USER_START pid=4879 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:19.530956 kernel: audit: type=1105 audit(1734075559.521:432): pid=4879 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:19.531000 audit[4888]: CRED_ACQ pid=4888 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:19.538947 kernel: audit: type=1103 audit(1734075559.531:433): pid=4888 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:20.239576 sshd[4879]: pam_unix(sshd:session): session closed for user core Dec 13 07:39:20.241000 audit[4879]: USER_END pid=4879 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:20.255932 kernel: audit: type=1106 audit(1734075560.241:434): pid=4879 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:20.256120 kernel: audit: type=1104 audit(1734075560.252:435): pid=4879 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:20.252000 audit[4879]: CRED_DISP pid=4879 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:20.259000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.230.78.246:22-139.178.89.65:38752 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:39:20.259642 systemd[1]: sshd@11-10.230.78.246:22-139.178.89.65:38752.service: Deactivated successfully. Dec 13 07:39:20.262467 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 07:39:20.262468 systemd-logind[1301]: Session 12 logged out. Waiting for processes to exit. Dec 13 07:39:20.264693 systemd-logind[1301]: Removed session 12. Dec 13 07:39:20.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.230.78.246:22-139.178.89.65:38754 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 07:39:20.384508 systemd[1]: Started sshd@12-10.230.78.246:22-139.178.89.65:38754.service. Dec 13 07:39:20.697791 systemd[1]: run-containerd-runc-k8s.io-cca6b830759fb782ef48ee50ead9e2ca1809a7438bf018251a99625cfa9b7924-runc.5KkUFa.mount: Deactivated successfully. Dec 13 07:39:21.290000 audit[4899]: USER_ACCT pid=4899 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:21.292954 sshd[4899]: Accepted publickey for core from 139.178.89.65 port 38754 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY Dec 13 07:39:21.292000 audit[4899]: CRED_ACQ pid=4899 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:21.293000 audit[4899]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeeabcbca0 a2=3 a3=0 items=0 ppid=1 pid=4899 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:39:21.293000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 07:39:21.303552 sshd[4899]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 07:39:21.311624 systemd-logind[1301]: New session 13 of user core. Dec 13 07:39:21.311798 systemd[1]: Started session-13.scope. Dec 13 07:39:21.323000 audit[4899]: USER_START pid=4899 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:21.327000 audit[4921]: CRED_ACQ pid=4921 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:22.145089 sshd[4899]: pam_unix(sshd:session): session closed for user core Dec 13 07:39:22.148000 audit[4899]: USER_END pid=4899 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:22.148000 audit[4899]: CRED_DISP pid=4899 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:22.151368 systemd[1]: sshd@12-10.230.78.246:22-139.178.89.65:38754.service: Deactivated successfully. Dec 13 07:39:22.152614 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 07:39:22.150000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.230.78.246:22-139.178.89.65:38754 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:39:22.155539 systemd-logind[1301]: Session 13 logged out. Waiting for processes to exit. 
Dec 13 07:39:22.156844 systemd-logind[1301]: Removed session 13. Dec 13 07:39:22.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.230.78.246:22-139.178.89.65:38768 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:39:22.288860 systemd[1]: Started sshd@13-10.230.78.246:22-139.178.89.65:38768.service. Dec 13 07:39:23.197000 audit[4929]: USER_ACCT pid=4929 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:23.198753 sshd[4929]: Accepted publickey for core from 139.178.89.65 port 38768 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY Dec 13 07:39:23.199000 audit[4929]: CRED_ACQ pid=4929 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:23.199000 audit[4929]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdf4f5c030 a2=3 a3=0 items=0 ppid=1 pid=4929 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:39:23.199000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 07:39:23.202037 sshd[4929]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 07:39:23.210989 systemd[1]: Started session-14.scope. Dec 13 07:39:23.212337 systemd-logind[1301]: New session 14 of user core. Dec 13 07:39:23.223000 audit[4929]: USER_START pid=4929 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:23.226000 audit[4932]: CRED_ACQ pid=4932 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:23.957076 sshd[4929]: pam_unix(sshd:session): session closed for user core Dec 13 07:39:23.958000 audit[4929]: USER_END pid=4929 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:23.965570 systemd[1]: sshd@13-10.230.78.246:22-139.178.89.65:38768.service: Deactivated successfully. Dec 13 07:39:23.967392 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 07:39:23.970172 systemd-logind[1301]: Session 14 logged out. Waiting for processes to exit. 
Dec 13 07:39:23.972458 kernel: kauditd_printk_skb: 20 callbacks suppressed Dec 13 07:39:23.972592 kernel: audit: type=1106 audit(1734075563.958:452): pid=4929 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:23.972033 systemd-logind[1301]: Removed session 14. Dec 13 07:39:23.959000 audit[4929]: CRED_DISP pid=4929 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:23.978913 kernel: audit: type=1104 audit(1734075563.959:453): pid=4929 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:23.959000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.230.78.246:22-139.178.89.65:38768 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:39:23.985631 kernel: audit: type=1131 audit(1734075563.959:454): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.230.78.246:22-139.178.89.65:38768 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:39:29.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.230.78.246:22-139.178.89.65:40952 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:39:29.104294 systemd[1]: Started sshd@14-10.230.78.246:22-139.178.89.65:40952.service. Dec 13 07:39:29.114804 kernel: audit: type=1130 audit(1734075569.103:455): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.230.78.246:22-139.178.89.65:40952 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 07:39:30.057000 audit[4951]: USER_ACCT pid=4951 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:30.063303 sshd[4951]: Accepted publickey for core from 139.178.89.65 port 40952 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY Dec 13 07:39:30.065021 kernel: audit: type=1101 audit(1734075570.057:456): pid=4951 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:30.065000 audit[4951]: CRED_ACQ pid=4951 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:30.073970 sshd[4951]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 07:39:30.077139 kernel: audit: type=1103 audit(1734075570.065:457): pid=4951 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:30.077208 kernel: audit: type=1006 audit(1734075570.072:458): pid=4951 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Dec 13 07:39:30.072000 audit[4951]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc610d6c70 a2=3 a3=0 items=0 ppid=1 pid=4951 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:39:30.084018 kernel: audit: type=1300 audit(1734075570.072:458): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc610d6c70 a2=3 a3=0 items=0 ppid=1 pid=4951 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:39:30.072000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 07:39:30.092229 systemd[1]: Started session-15.scope. Dec 13 07:39:30.093025 kernel: audit: type=1327 audit(1734075570.072:458): proctitle=737368643A20636F7265205B707269765D Dec 13 07:39:30.093468 systemd-logind[1301]: New session 15 of user core. 
Dec 13 07:39:30.108000 audit[4951]: USER_START pid=4951 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:30.119968 kernel: audit: type=1105 audit(1734075570.108:459): pid=4951 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:30.120119 kernel: audit: type=1103 audit(1734075570.108:460): pid=4954 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:30.108000 audit[4954]: CRED_ACQ pid=4954 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:30.799344 sshd[4951]: pam_unix(sshd:session): session closed for user core Dec 13 07:39:30.811996 kernel: audit: type=1106 audit(1734075570.800:461): pid=4951 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:30.800000 audit[4951]: USER_END pid=4951 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:30.805254 systemd[1]: sshd@14-10.230.78.246:22-139.178.89.65:40952.service: Deactivated successfully. Dec 13 07:39:30.806878 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 07:39:30.812915 systemd-logind[1301]: Session 15 logged out. Waiting for processes to exit. Dec 13 07:39:30.800000 audit[4951]: CRED_DISP pid=4951 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:30.819944 systemd-logind[1301]: Removed session 15. Dec 13 07:39:30.804000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.230.78.246:22-139.178.89.65:40952 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:39:30.821652 kernel: audit: type=1104 audit(1734075570.800:462): pid=4951 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:35.944629 systemd[1]: Started sshd@15-10.230.78.246:22-139.178.89.65:40954.service. 
Dec 13 07:39:35.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.230.78.246:22-139.178.89.65:40954 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:39:35.949907 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 07:39:35.950009 kernel: audit: type=1130 audit(1734075575.944:464): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.230.78.246:22-139.178.89.65:40954 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:39:36.871000 audit[4983]: USER_ACCT pid=4983 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:36.874987 sshd[4983]: Accepted publickey for core from 139.178.89.65 port 40954 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY Dec 13 07:39:36.882219 kernel: audit: type=1101 audit(1734075576.871:465): pid=4983 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:36.883000 audit[4983]: CRED_ACQ pid=4983 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:36.885774 sshd[4983]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 07:39:36.894551 kernel: audit: type=1103 audit(1734075576.883:466): pid=4983 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:36.894755 kernel: audit: type=1006 audit(1734075576.883:467): pid=4983 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Dec 13 07:39:36.883000 audit[4983]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff5db94700 a2=3 a3=0 items=0 ppid=1 pid=4983 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:39:36.883000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 07:39:36.904468 kernel: audit: type=1300 audit(1734075576.883:467): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff5db94700 a2=3 a3=0 items=0 ppid=1 pid=4983 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:39:36.905993 kernel: audit: type=1327 audit(1734075576.883:467): proctitle=737368643A20636F7265205B707269765D Dec 13 07:39:36.913438 systemd[1]: Started session-16.scope. Dec 13 07:39:36.914074 systemd-logind[1301]: New session 16 of user core. 
Dec 13 07:39:36.927000 audit[4983]: USER_START pid=4983 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:36.936934 kernel: audit: type=1105 audit(1734075576.927:468): pid=4983 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:36.931000 audit[4986]: CRED_ACQ pid=4986 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:36.946059 kernel: audit: type=1103 audit(1734075576.931:469): pid=4986 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:37.629487 sshd[4983]: pam_unix(sshd:session): session closed for user core Dec 13 07:39:37.630000 audit[4983]: USER_END pid=4983 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:37.640101 systemd-logind[1301]: Session 16 logged out. Waiting for processes to exit. Dec 13 07:39:37.635000 audit[4983]: CRED_DISP pid=4983 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:37.642546 systemd[1]: sshd@15-10.230.78.246:22-139.178.89.65:40954.service: Deactivated successfully. Dec 13 07:39:37.644189 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 07:39:37.645463 systemd-logind[1301]: Removed session 16. Dec 13 07:39:37.648398 kernel: audit: type=1106 audit(1734075577.630:470): pid=4983 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:37.648527 kernel: audit: type=1104 audit(1734075577.635:471): pid=4983 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:37.642000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.230.78.246:22-139.178.89.65:40954 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:39:41.567462 systemd[1]: run-containerd-runc-k8s.io-f686aa7caff47cf7895fa8aec1a3132bcab42b03efa7a9dbef0ec320ee2de6cd-runc.wozHEM.mount: Deactivated successfully. 
Dec 13 07:39:42.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.230.78.246:22-139.178.89.65:36000 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:39:42.779999 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 07:39:42.780141 kernel: audit: type=1130 audit(1734075582.772:473): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.230.78.246:22-139.178.89.65:36000 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:39:42.773919 systemd[1]: Started sshd@16-10.230.78.246:22-139.178.89.65:36000.service. Dec 13 07:39:43.681000 audit[5018]: USER_ACCT pid=5018 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:43.687831 sshd[5018]: Accepted publickey for core from 139.178.89.65 port 36000 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY Dec 13 07:39:43.694073 kernel: audit: type=1101 audit(1734075583.681:474): pid=5018 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:43.693000 audit[5018]: CRED_ACQ pid=5018 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:43.702899 sshd[5018]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 07:39:43.705805 kernel: audit: type=1103 audit(1734075583.693:475): pid=5018 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:43.705939 kernel: audit: type=1006 audit(1734075583.693:476): pid=5018 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Dec 13 07:39:43.693000 audit[5018]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff53ef0c00 a2=3 a3=0 items=0 ppid=1 pid=5018 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:39:43.712934 kernel: audit: type=1300 audit(1734075583.693:476): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff53ef0c00 a2=3 a3=0 items=0 ppid=1 pid=5018 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:39:43.693000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 07:39:43.715920 kernel: audit: type=1327 audit(1734075583.693:476): proctitle=737368643A20636F7265205B707269765D Dec 13 07:39:43.722306 systemd-logind[1301]: New session 17 of user core. Dec 13 07:39:43.724269 systemd[1]: Started session-17.scope. 
Dec 13 07:39:43.733000 audit[5018]: USER_START pid=5018 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:43.733000 audit[5021]: CRED_ACQ pid=5021 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:43.748270 kernel: audit: type=1105 audit(1734075583.733:477): pid=5018 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:43.748459 kernel: audit: type=1103 audit(1734075583.733:478): pid=5021 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:44.429038 sshd[5018]: pam_unix(sshd:session): session closed for user core Dec 13 07:39:44.429000 audit[5018]: USER_END pid=5018 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:44.442195 kernel: audit: type=1106 audit(1734075584.429:479): pid=5018 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:44.437250 systemd-logind[1301]: Session 17 logged out. Waiting for processes to exit. Dec 13 07:39:44.448921 kernel: audit: type=1104 audit(1734075584.433:480): pid=5018 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:44.433000 audit[5018]: CRED_DISP pid=5018 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:44.439255 systemd[1]: sshd@16-10.230.78.246:22-139.178.89.65:36000.service: Deactivated successfully. Dec 13 07:39:44.440870 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 07:39:44.443022 systemd-logind[1301]: Removed session 17. Dec 13 07:39:44.437000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.230.78.246:22-139.178.89.65:36000 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 07:39:46.201803 env[1313]: time="2024-12-13T07:39:46.201697639Z" level=info msg="StopPodSandbox for \"6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6\"" Dec 13 07:39:46.714377 env[1313]: 2024-12-13 07:39:46.501 [WARNING][5044] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ktue8.gb1.brightbox.com-k8s-calico--apiserver--8888c77cc--8shn8-eth0", GenerateName:"calico-apiserver-8888c77cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"acbf9fc3-226f-42b6-886d-99d3e542c4e9", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 7, 38, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8888c77cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ktue8.gb1.brightbox.com", ContainerID:"44c1638d51e18865347f45a34898277e5096b397b9a07328b5121a83ed758b84", Pod:"calico-apiserver-8888c77cc-8shn8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali779acc4d7b9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 07:39:46.714377 env[1313]: 2024-12-13 07:39:46.506 [INFO][5044] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6" Dec 13 07:39:46.714377 env[1313]: 2024-12-13 07:39:46.506 [INFO][5044] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6" iface="eth0" netns="" Dec 13 07:39:46.714377 env[1313]: 2024-12-13 07:39:46.506 [INFO][5044] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6" Dec 13 07:39:46.714377 env[1313]: 2024-12-13 07:39:46.506 [INFO][5044] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6" Dec 13 07:39:46.714377 env[1313]: 2024-12-13 07:39:46.688 [INFO][5050] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6" HandleID="k8s-pod-network.6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6" Workload="srv--ktue8.gb1.brightbox.com-k8s-calico--apiserver--8888c77cc--8shn8-eth0" Dec 13 07:39:46.714377 env[1313]: 2024-12-13 07:39:46.690 [INFO][5050] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 07:39:46.714377 env[1313]: 2024-12-13 07:39:46.690 [INFO][5050] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 07:39:46.714377 env[1313]: 2024-12-13 07:39:46.707 [WARNING][5050] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6" HandleID="k8s-pod-network.6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6" Workload="srv--ktue8.gb1.brightbox.com-k8s-calico--apiserver--8888c77cc--8shn8-eth0" Dec 13 07:39:46.714377 env[1313]: 2024-12-13 07:39:46.707 [INFO][5050] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6" HandleID="k8s-pod-network.6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6" Workload="srv--ktue8.gb1.brightbox.com-k8s-calico--apiserver--8888c77cc--8shn8-eth0" Dec 13 07:39:46.714377 env[1313]: 2024-12-13 07:39:46.709 [INFO][5050] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 07:39:46.714377 env[1313]: 2024-12-13 07:39:46.711 [INFO][5044] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6" Dec 13 07:39:46.719947 env[1313]: time="2024-12-13T07:39:46.714828675Z" level=info msg="TearDown network for sandbox \"6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6\" successfully" Dec 13 07:39:46.719947 env[1313]: time="2024-12-13T07:39:46.715326540Z" level=info msg="StopPodSandbox for \"6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6\" returns successfully" Dec 13 07:39:46.719947 env[1313]: time="2024-12-13T07:39:46.719579163Z" level=info msg="RemovePodSandbox for \"6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6\"" Dec 13 07:39:46.719947 env[1313]: time="2024-12-13T07:39:46.719659266Z" level=info msg="Forcibly stopping sandbox \"6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6\"" Dec 13 07:39:46.848557 env[1313]: 2024-12-13 07:39:46.773 [WARNING][5068] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ktue8.gb1.brightbox.com-k8s-calico--apiserver--8888c77cc--8shn8-eth0", GenerateName:"calico-apiserver-8888c77cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"acbf9fc3-226f-42b6-886d-99d3e542c4e9", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 7, 38, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"8888c77cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ktue8.gb1.brightbox.com", ContainerID:"44c1638d51e18865347f45a34898277e5096b397b9a07328b5121a83ed758b84", Pod:"calico-apiserver-8888c77cc-8shn8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali779acc4d7b9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 07:39:46.848557 env[1313]: 2024-12-13 07:39:46.774 [INFO][5068] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6" Dec 13 07:39:46.848557 env[1313]: 2024-12-13 07:39:46.774 [INFO][5068] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6" iface="eth0" netns="" Dec 13 07:39:46.848557 env[1313]: 2024-12-13 07:39:46.774 [INFO][5068] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6" Dec 13 07:39:46.848557 env[1313]: 2024-12-13 07:39:46.774 [INFO][5068] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6" Dec 13 07:39:46.848557 env[1313]: 2024-12-13 07:39:46.819 [INFO][5074] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6" HandleID="k8s-pod-network.6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6" Workload="srv--ktue8.gb1.brightbox.com-k8s-calico--apiserver--8888c77cc--8shn8-eth0" Dec 13 07:39:46.848557 env[1313]: 2024-12-13 07:39:46.820 [INFO][5074] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 07:39:46.848557 env[1313]: 2024-12-13 07:39:46.820 [INFO][5074] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 07:39:46.848557 env[1313]: 2024-12-13 07:39:46.838 [WARNING][5074] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6" HandleID="k8s-pod-network.6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6" Workload="srv--ktue8.gb1.brightbox.com-k8s-calico--apiserver--8888c77cc--8shn8-eth0" Dec 13 07:39:46.848557 env[1313]: 2024-12-13 07:39:46.838 [INFO][5074] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6" HandleID="k8s-pod-network.6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6" Workload="srv--ktue8.gb1.brightbox.com-k8s-calico--apiserver--8888c77cc--8shn8-eth0" Dec 13 07:39:46.848557 env[1313]: 2024-12-13 07:39:46.842 [INFO][5074] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 07:39:46.848557 env[1313]: 2024-12-13 07:39:46.845 [INFO][5068] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6" Dec 13 07:39:46.849684 env[1313]: time="2024-12-13T07:39:46.848602011Z" level=info msg="TearDown network for sandbox \"6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6\" successfully" Dec 13 07:39:46.854114 env[1313]: time="2024-12-13T07:39:46.854076982Z" level=info msg="RemovePodSandbox \"6a3129ad4f96605a222c71853ea6c89050b2f37fef797bd8e29cf3d32df246c6\" returns successfully" Dec 13 07:39:46.855010 env[1313]: time="2024-12-13T07:39:46.854972455Z" level=info msg="StopPodSandbox for \"0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28\"" Dec 13 07:39:47.009417 env[1313]: 2024-12-13 07:39:46.933 [WARNING][5093] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ktue8.gb1.brightbox.com-k8s-calico--kube--controllers--7ccc67bff4--xdz9n-eth0", GenerateName:"calico-kube-controllers-7ccc67bff4-", Namespace:"calico-system", SelfLink:"", UID:"1bec6ab5-1944-4370-b6de-fe55870be674", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 7, 38, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7ccc67bff4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ktue8.gb1.brightbox.com", ContainerID:"c2acb483ec5c8f705787454e04fdd290cb12a78742f4306de4015389bf5e5681", Pod:"calico-kube-controllers-7ccc67bff4-xdz9n", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.109.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9b47b3ae4cd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 07:39:47.009417 env[1313]: 2024-12-13 07:39:46.934 [INFO][5093] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28" Dec 13 07:39:47.009417 env[1313]: 2024-12-13 07:39:46.934 [INFO][5093] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28" iface="eth0" netns="" Dec 13 07:39:47.009417 env[1313]: 2024-12-13 07:39:46.934 [INFO][5093] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28" Dec 13 07:39:47.009417 env[1313]: 2024-12-13 07:39:46.934 [INFO][5093] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28" Dec 13 07:39:47.009417 env[1313]: 2024-12-13 07:39:46.989 [INFO][5099] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28" HandleID="k8s-pod-network.0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28" Workload="srv--ktue8.gb1.brightbox.com-k8s-calico--kube--controllers--7ccc67bff4--xdz9n-eth0" Dec 13 07:39:47.009417 env[1313]: 2024-12-13 07:39:46.989 [INFO][5099] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 07:39:47.009417 env[1313]: 2024-12-13 07:39:46.989 [INFO][5099] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 07:39:47.009417 env[1313]: 2024-12-13 07:39:47.002 [WARNING][5099] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28" HandleID="k8s-pod-network.0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28" Workload="srv--ktue8.gb1.brightbox.com-k8s-calico--kube--controllers--7ccc67bff4--xdz9n-eth0" Dec 13 07:39:47.009417 env[1313]: 2024-12-13 07:39:47.002 [INFO][5099] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28" HandleID="k8s-pod-network.0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28" Workload="srv--ktue8.gb1.brightbox.com-k8s-calico--kube--controllers--7ccc67bff4--xdz9n-eth0" Dec 13 07:39:47.009417 env[1313]: 2024-12-13 07:39:47.004 [INFO][5099] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 07:39:47.009417 env[1313]: 2024-12-13 07:39:47.006 [INFO][5093] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28" Dec 13 07:39:47.009417 env[1313]: time="2024-12-13T07:39:47.008606627Z" level=info msg="TearDown network for sandbox \"0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28\" successfully" Dec 13 07:39:47.009417 env[1313]: time="2024-12-13T07:39:47.008661593Z" level=info msg="StopPodSandbox for \"0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28\" returns successfully" Dec 13 07:39:47.010582 env[1313]: time="2024-12-13T07:39:47.009853387Z" level=info msg="RemovePodSandbox for \"0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28\"" Dec 13 07:39:47.010582 env[1313]: time="2024-12-13T07:39:47.009974137Z" level=info msg="Forcibly stopping sandbox \"0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28\"" Dec 13 07:39:47.201353 env[1313]: 2024-12-13 07:39:47.093 [WARNING][5119] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ktue8.gb1.brightbox.com-k8s-calico--kube--controllers--7ccc67bff4--xdz9n-eth0", GenerateName:"calico-kube-controllers-7ccc67bff4-", Namespace:"calico-system", SelfLink:"", UID:"1bec6ab5-1944-4370-b6de-fe55870be674", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 7, 38, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7ccc67bff4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ktue8.gb1.brightbox.com", ContainerID:"c2acb483ec5c8f705787454e04fdd290cb12a78742f4306de4015389bf5e5681", Pod:"calico-kube-controllers-7ccc67bff4-xdz9n", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.109.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9b47b3ae4cd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 07:39:47.201353 env[1313]: 2024-12-13 07:39:47.094 [INFO][5119] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28" Dec 13 07:39:47.201353 env[1313]: 2024-12-13 07:39:47.094 [INFO][5119] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28" iface="eth0" netns="" Dec 13 07:39:47.201353 env[1313]: 2024-12-13 07:39:47.094 [INFO][5119] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28" Dec 13 07:39:47.201353 env[1313]: 2024-12-13 07:39:47.094 [INFO][5119] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28" Dec 13 07:39:47.201353 env[1313]: 2024-12-13 07:39:47.186 [INFO][5125] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28" HandleID="k8s-pod-network.0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28" Workload="srv--ktue8.gb1.brightbox.com-k8s-calico--kube--controllers--7ccc67bff4--xdz9n-eth0" Dec 13 07:39:47.201353 env[1313]: 2024-12-13 07:39:47.186 [INFO][5125] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 07:39:47.201353 env[1313]: 2024-12-13 07:39:47.187 [INFO][5125] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 07:39:47.201353 env[1313]: 2024-12-13 07:39:47.194 [WARNING][5125] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28" HandleID="k8s-pod-network.0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28" Workload="srv--ktue8.gb1.brightbox.com-k8s-calico--kube--controllers--7ccc67bff4--xdz9n-eth0" Dec 13 07:39:47.201353 env[1313]: 2024-12-13 07:39:47.195 [INFO][5125] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28" HandleID="k8s-pod-network.0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28" Workload="srv--ktue8.gb1.brightbox.com-k8s-calico--kube--controllers--7ccc67bff4--xdz9n-eth0" Dec 13 07:39:47.201353 env[1313]: 2024-12-13 07:39:47.197 [INFO][5125] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 07:39:47.201353 env[1313]: 2024-12-13 07:39:47.199 [INFO][5119] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28" Dec 13 07:39:47.203294 env[1313]: time="2024-12-13T07:39:47.201386140Z" level=info msg="TearDown network for sandbox \"0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28\" successfully" Dec 13 07:39:47.207258 env[1313]: time="2024-12-13T07:39:47.207200411Z" level=info msg="RemovePodSandbox \"0809da9a4fa93847651a5a41145f55f908d72a0c24feb54b36db67f41b098c28\" returns successfully" Dec 13 07:39:47.209952 env[1313]: time="2024-12-13T07:39:47.209876788Z" level=info msg="StopPodSandbox for \"f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e\"" Dec 13 07:39:47.430301 env[1313]: 2024-12-13 07:39:47.318 [WARNING][5144] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ktue8.gb1.brightbox.com-k8s-csi--node--driver--jgsz2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6755c6bd-417c-469c-9e0b-b65078e35af8", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 7, 38, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ktue8.gb1.brightbox.com", ContainerID:"0df636dae344507ab25598249209d9d823ca43b9c21d0d2fb1703588fa501747", Pod:"csi-node-driver-jgsz2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.109.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali65101985d45", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 07:39:47.430301 env[1313]: 2024-12-13 07:39:47.320 [INFO][5144] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e" Dec 13 07:39:47.430301 env[1313]: 2024-12-13 07:39:47.321 [INFO][5144] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e" iface="eth0" netns="" Dec 13 07:39:47.430301 env[1313]: 2024-12-13 07:39:47.321 [INFO][5144] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e" Dec 13 07:39:47.430301 env[1313]: 2024-12-13 07:39:47.321 [INFO][5144] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e" Dec 13 07:39:47.430301 env[1313]: 2024-12-13 07:39:47.402 [INFO][5150] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e" HandleID="k8s-pod-network.f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e" Workload="srv--ktue8.gb1.brightbox.com-k8s-csi--node--driver--jgsz2-eth0" Dec 13 07:39:47.430301 env[1313]: 2024-12-13 07:39:47.405 [INFO][5150] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 07:39:47.430301 env[1313]: 2024-12-13 07:39:47.405 [INFO][5150] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 07:39:47.430301 env[1313]: 2024-12-13 07:39:47.417 [WARNING][5150] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e" HandleID="k8s-pod-network.f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e" Workload="srv--ktue8.gb1.brightbox.com-k8s-csi--node--driver--jgsz2-eth0" Dec 13 07:39:47.430301 env[1313]: 2024-12-13 07:39:47.417 [INFO][5150] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e" HandleID="k8s-pod-network.f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e" Workload="srv--ktue8.gb1.brightbox.com-k8s-csi--node--driver--jgsz2-eth0" Dec 13 07:39:47.430301 env[1313]: 2024-12-13 07:39:47.422 [INFO][5150] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 07:39:47.430301 env[1313]: 2024-12-13 07:39:47.425 [INFO][5144] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e" Dec 13 07:39:47.431524 env[1313]: time="2024-12-13T07:39:47.431466419Z" level=info msg="TearDown network for sandbox \"f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e\" successfully" Dec 13 07:39:47.431699 env[1313]: time="2024-12-13T07:39:47.431665290Z" level=info msg="StopPodSandbox for \"f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e\" returns successfully" Dec 13 07:39:47.435414 env[1313]: time="2024-12-13T07:39:47.435372336Z" level=info msg="RemovePodSandbox for \"f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e\"" Dec 13 07:39:47.435773 env[1313]: time="2024-12-13T07:39:47.435707093Z" level=info msg="Forcibly stopping sandbox \"f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e\"" Dec 13 07:39:47.589336 env[1313]: 2024-12-13 07:39:47.516 [WARNING][5168] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--ktue8.gb1.brightbox.com-k8s-csi--node--driver--jgsz2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6755c6bd-417c-469c-9e0b-b65078e35af8", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 7, 38, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-ktue8.gb1.brightbox.com", ContainerID:"0df636dae344507ab25598249209d9d823ca43b9c21d0d2fb1703588fa501747", Pod:"csi-node-driver-jgsz2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.109.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali65101985d45", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 07:39:47.589336 env[1313]: 2024-12-13 07:39:47.516 [INFO][5168] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e" Dec 13 07:39:47.589336 env[1313]: 2024-12-13 07:39:47.517 [INFO][5168] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e" iface="eth0" netns="" Dec 13 07:39:47.589336 env[1313]: 2024-12-13 07:39:47.517 [INFO][5168] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e" Dec 13 07:39:47.589336 env[1313]: 2024-12-13 07:39:47.517 [INFO][5168] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e" Dec 13 07:39:47.589336 env[1313]: 2024-12-13 07:39:47.574 [INFO][5174] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e" HandleID="k8s-pod-network.f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e" Workload="srv--ktue8.gb1.brightbox.com-k8s-csi--node--driver--jgsz2-eth0" Dec 13 07:39:47.589336 env[1313]: 2024-12-13 07:39:47.574 [INFO][5174] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 07:39:47.589336 env[1313]: 2024-12-13 07:39:47.574 [INFO][5174] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 07:39:47.589336 env[1313]: 2024-12-13 07:39:47.582 [WARNING][5174] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e" HandleID="k8s-pod-network.f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e" Workload="srv--ktue8.gb1.brightbox.com-k8s-csi--node--driver--jgsz2-eth0" Dec 13 07:39:47.589336 env[1313]: 2024-12-13 07:39:47.583 [INFO][5174] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e" HandleID="k8s-pod-network.f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e" Workload="srv--ktue8.gb1.brightbox.com-k8s-csi--node--driver--jgsz2-eth0" Dec 13 07:39:47.589336 env[1313]: 2024-12-13 07:39:47.585 [INFO][5174] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 07:39:47.589336 env[1313]: 2024-12-13 07:39:47.587 [INFO][5168] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e" Dec 13 07:39:47.591405 env[1313]: time="2024-12-13T07:39:47.591312980Z" level=info msg="TearDown network for sandbox \"f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e\" successfully" Dec 13 07:39:47.595375 env[1313]: time="2024-12-13T07:39:47.595300857Z" level=info msg="RemovePodSandbox \"f90d372d8e59276dc065fcfb61de7f2f4be40842cc590c35e3c81eeb4fb49f7e\" returns successfully" Dec 13 07:39:49.576646 systemd[1]: Started sshd@17-10.230.78.246:22-139.178.89.65:39310.service. Dec 13 07:39:49.600485 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 07:39:49.601023 kernel: audit: type=1130 audit(1734075589.577:482): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.230.78.246:22-139.178.89.65:39310 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:39:49.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.230.78.246:22-139.178.89.65:39310 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 07:39:50.523000 audit[5180]: USER_ACCT pid=5180 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:50.533617 sshd[5180]: Accepted publickey for core from 139.178.89.65 port 39310 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY Dec 13 07:39:50.534605 kernel: audit: type=1101 audit(1734075590.523:483): pid=5180 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:50.532000 audit[5180]: CRED_ACQ pid=5180 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:50.535659 sshd[5180]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 07:39:50.541933 kernel: audit: type=1103 audit(1734075590.532:484): pid=5180 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:50.549631 kernel: audit: type=1006 audit(1734075590.532:485): pid=5180 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Dec 13 07:39:50.549728 kernel: audit: type=1300 audit(1734075590.532:485): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc5917a7b0 a2=3 a3=0 items=0 ppid=1 pid=5180 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:39:50.532000 audit[5180]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc5917a7b0 a2=3 a3=0 items=0 ppid=1 pid=5180 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:39:50.532000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 07:39:50.558065 kernel: audit: type=1327 audit(1734075590.532:485): proctitle=737368643A20636F7265205B707269765D Dec 13 07:39:50.566051 systemd[1]: Started session-18.scope. Dec 13 07:39:50.566986 systemd-logind[1301]: New session 18 of user core. 
Dec 13 07:39:50.576000 audit[5180]: USER_START pid=5180 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:50.585591 kernel: audit: type=1105 audit(1734075590.576:486): pid=5180 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:50.585000 audit[5183]: CRED_ACQ pid=5183 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:50.592947 kernel: audit: type=1103 audit(1734075590.585:487): pid=5183 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:51.472243 sshd[5180]: pam_unix(sshd:session): session closed for user core Dec 13 07:39:51.485189 kernel: audit: type=1106 audit(1734075591.473:488): pid=5180 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:51.473000 audit[5180]: USER_END pid=5180 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:51.482148 systemd-logind[1301]: Session 18 logged out. Waiting for processes to exit. Dec 13 07:39:51.484859 systemd[1]: sshd@17-10.230.78.246:22-139.178.89.65:39310.service: Deactivated successfully. Dec 13 07:39:51.486366 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 07:39:51.488500 systemd-logind[1301]: Removed session 18. Dec 13 07:39:51.473000 audit[5180]: CRED_DISP pid=5180 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:51.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.230.78.246:22-139.178.89.65:39310 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:39:51.495927 kernel: audit: type=1104 audit(1734075591.473:489): pid=5180 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:51.619853 systemd[1]: Started sshd@18-10.230.78.246:22-139.178.89.65:39316.service. 
Dec 13 07:39:51.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.230.78.246:22-139.178.89.65:39316 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:39:52.510000 audit[5213]: USER_ACCT pid=5213 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:52.511507 sshd[5213]: Accepted publickey for core from 139.178.89.65 port 39316 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY Dec 13 07:39:52.512000 audit[5213]: CRED_ACQ pid=5213 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:52.512000 audit[5213]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc3cfe9c50 a2=3 a3=0 items=0 ppid=1 pid=5213 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:39:52.512000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 07:39:52.514199 sshd[5213]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 07:39:52.521825 systemd-logind[1301]: New session 19 of user core. Dec 13 07:39:52.522869 systemd[1]: Started session-19.scope. Dec 13 07:39:52.532000 audit[5213]: USER_START pid=5213 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:52.534000 audit[5216]: CRED_ACQ pid=5216 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:53.668419 sshd[5213]: pam_unix(sshd:session): session closed for user core Dec 13 07:39:53.670000 audit[5213]: USER_END pid=5213 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:53.670000 audit[5213]: CRED_DISP pid=5213 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:53.673132 systemd[1]: sshd@18-10.230.78.246:22-139.178.89.65:39316.service: Deactivated successfully. Dec 13 07:39:53.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.230.78.246:22-139.178.89.65:39316 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:39:53.674392 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 07:39:53.675800 systemd-logind[1301]: Session 19 logged out. Waiting for processes to exit. 
Dec 13 07:39:53.677195 systemd-logind[1301]: Removed session 19. Dec 13 07:39:53.814527 systemd[1]: Started sshd@19-10.230.78.246:22-139.178.89.65:39326.service. Dec 13 07:39:53.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.230.78.246:22-139.178.89.65:39326 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:39:54.712000 audit[5224]: USER_ACCT pid=5224 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:54.716203 sshd[5224]: Accepted publickey for core from 139.178.89.65 port 39326 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY Dec 13 07:39:54.717428 kernel: kauditd_printk_skb: 13 callbacks suppressed Dec 13 07:39:54.717546 kernel: audit: type=1101 audit(1734075594.712:501): pid=5224 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:54.724000 audit[5224]: CRED_ACQ pid=5224 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:54.725980 sshd[5224]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 07:39:54.735040 kernel: audit: type=1103 audit(1734075594.724:502): pid=5224 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:54.735329 kernel: audit: type=1006 audit(1734075594.724:503): pid=5224 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Dec 13 07:39:54.736310 kernel: audit: type=1300 audit(1734075594.724:503): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeb8ad69c0 a2=3 a3=0 items=0 ppid=1 pid=5224 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:39:54.724000 audit[5224]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeb8ad69c0 a2=3 a3=0 items=0 ppid=1 pid=5224 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:39:54.742990 kernel: audit: type=1327 audit(1734075594.724:503): proctitle=737368643A20636F7265205B707269765D Dec 13 07:39:54.724000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 07:39:54.748314 systemd-logind[1301]: New session 20 of user core. Dec 13 07:39:54.749624 systemd[1]: Started session-20.scope. 
Dec 13 07:39:54.758000 audit[5224]: USER_START pid=5224 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:54.768941 kernel: audit: type=1105 audit(1734075594.758:504): pid=5224 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:54.760000 audit[5227]: CRED_ACQ pid=5227 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:54.775980 kernel: audit: type=1103 audit(1734075594.760:505): pid=5227 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:58.027973 sshd[5224]: pam_unix(sshd:session): session closed for user core Dec 13 07:39:58.026000 audit[5244]: NETFILTER_CFG table=filter:114 family=2 entries=20 op=nft_register_rule pid=5244 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 07:39:58.055377 kernel: audit: type=1325 audit(1734075598.026:506): table=filter:114 family=2 entries=20 op=nft_register_rule pid=5244 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 07:39:58.055557 kernel: audit: type=1300 audit(1734075598.026:506): arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7fffea7a37a0 a2=0 a3=7fffea7a378c items=0 ppid=2405 pid=5244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:39:58.056938 kernel: audit: type=1327 audit(1734075598.026:506): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 07:39:58.026000 audit[5244]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7fffea7a37a0 a2=0 a3=7fffea7a378c items=0 ppid=2405 pid=5244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:39:58.026000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 07:39:58.044000 audit[5244]: NETFILTER_CFG table=nat:115 family=2 entries=22 op=nft_register_rule pid=5244 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 07:39:58.044000 audit[5244]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7fffea7a37a0 a2=0 a3=0 items=0 ppid=2405 pid=5244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:39:58.044000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 07:39:58.060000 
audit[5224]: USER_END pid=5224 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:58.060000 audit[5224]: CRED_DISP pid=5224 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:58.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.230.78.246:22-139.178.89.65:39326 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:39:58.063999 systemd[1]: sshd@19-10.230.78.246:22-139.178.89.65:39326.service: Deactivated successfully. Dec 13 07:39:58.065632 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 07:39:58.069636 systemd-logind[1301]: Session 20 logged out. Waiting for processes to exit. Dec 13 07:39:58.077076 systemd-logind[1301]: Removed session 20. Dec 13 07:39:58.073000 audit[5248]: NETFILTER_CFG table=filter:116 family=2 entries=32 op=nft_register_rule pid=5248 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 07:39:58.073000 audit[5248]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffe80bdee00 a2=0 a3=7ffe80bdedec items=0 ppid=2405 pid=5248 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:39:58.073000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 07:39:58.081000 audit[5248]: NETFILTER_CFG table=nat:117 family=2 entries=22 op=nft_register_rule pid=5248 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 07:39:58.081000 audit[5248]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffe80bdee00 a2=0 a3=0 items=0 ppid=2405 pid=5248 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:39:58.081000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 07:39:58.146725 systemd[1]: Started sshd@20-10.230.78.246:22-139.178.89.65:39338.service. Dec 13 07:39:58.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.230.78.246:22-139.178.89.65:39338 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 07:39:59.063000 audit[5249]: USER_ACCT pid=5249 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:59.065000 audit[5249]: CRED_ACQ pid=5249 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:59.065000 audit[5249]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc1cc13a10 a2=3 a3=0 items=0 ppid=1 pid=5249 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:39:59.065000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 07:39:59.067298 sshd[5249]: Accepted publickey for core from 139.178.89.65 port 39338 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY Dec 13 07:39:59.069797 sshd[5249]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 07:39:59.078672 systemd[1]: Started session-21.scope. Dec 13 07:39:59.079226 systemd-logind[1301]: New session 21 of user core. Dec 13 07:39:59.086000 audit[5249]: USER_START pid=5249 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:39:59.089000 audit[5258]: CRED_ACQ pid=5258 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:40:00.698392 sshd[5249]: pam_unix(sshd:session): session closed for user core Dec 13 07:40:00.700000 audit[5249]: USER_END pid=5249 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:40:00.706549 kernel: kauditd_printk_skb: 20 callbacks suppressed Dec 13 07:40:00.709275 kernel: audit: type=1106 audit(1734075600.700:519): pid=5249 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:40:00.713785 systemd[1]: sshd@20-10.230.78.246:22-139.178.89.65:39338.service: Deactivated successfully. Dec 13 07:40:00.716100 systemd-logind[1301]: Session 21 logged out. Waiting for processes to exit. Dec 13 07:40:00.716101 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 07:40:00.718862 systemd-logind[1301]: Removed session 21. 
Dec 13 07:40:00.700000 audit[5249]: CRED_DISP pid=5249 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:40:00.729927 kernel: audit: type=1104 audit(1734075600.700:520): pid=5249 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:40:00.713000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.230.78.246:22-139.178.89.65:39338 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:40:00.737961 kernel: audit: type=1131 audit(1734075600.713:521): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.230.78.246:22-139.178.89.65:39338 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:40:00.843495 systemd[1]: Started sshd@21-10.230.78.246:22-139.178.89.65:48600.service. Dec 13 07:40:00.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.230.78.246:22-139.178.89.65:48600 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:40:00.850912 kernel: audit: type=1130 audit(1734075600.843:522): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.230.78.246:22-139.178.89.65:48600 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 07:40:01.768000 audit[5267]: USER_ACCT pid=5267 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:40:01.775231 sshd[5267]: Accepted publickey for core from 139.178.89.65 port 48600 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY Dec 13 07:40:01.779916 kernel: audit: type=1101 audit(1734075601.768:523): pid=5267 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:40:01.781000 audit[5267]: CRED_ACQ pid=5267 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:40:01.793576 kernel: audit: type=1103 audit(1734075601.781:524): pid=5267 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:40:01.793757 kernel: audit: type=1006 audit(1734075601.781:525): pid=5267 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Dec 13 07:40:01.794073 kernel: audit: type=1300 audit(1734075601.781:525): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd61d74b90 a2=3 a3=0 items=0 ppid=1 pid=5267 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:40:01.781000 audit[5267]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd61d74b90 a2=3 a3=0 items=0 ppid=1 pid=5267 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:40:01.795156 sshd[5267]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 07:40:01.800968 kernel: audit: type=1327 audit(1734075601.781:525): proctitle=737368643A20636F7265205B707269765D Dec 13 07:40:01.781000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 07:40:01.809336 systemd-logind[1301]: New session 22 of user core. Dec 13 07:40:01.811056 systemd[1]: Started session-22.scope. 
Dec 13 07:40:01.823000 audit[5267]: USER_START pid=5267 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:40:01.832736 kernel: audit: type=1105 audit(1734075601.823:526): pid=5267 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:40:01.831000 audit[5270]: CRED_ACQ pid=5270 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:40:02.583069 sshd[5267]: pam_unix(sshd:session): session closed for user core Dec 13 07:40:02.584000 audit[5267]: USER_END pid=5267 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:40:02.584000 audit[5267]: CRED_DISP pid=5267 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:40:02.587355 systemd[1]: sshd@21-10.230.78.246:22-139.178.89.65:48600.service: Deactivated successfully. Dec 13 07:40:02.586000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.230.78.246:22-139.178.89.65:48600 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:40:02.588687 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 07:40:02.590252 systemd-logind[1301]: Session 22 logged out. Waiting for processes to exit. Dec 13 07:40:02.591681 systemd-logind[1301]: Removed session 22. 
Dec 13 07:40:06.828000 audit[5280]: NETFILTER_CFG table=filter:118 family=2 entries=20 op=nft_register_rule pid=5280 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 07:40:06.839069 kernel: kauditd_printk_skb: 4 callbacks suppressed Dec 13 07:40:06.839906 kernel: audit: type=1325 audit(1734075606.828:531): table=filter:118 family=2 entries=20 op=nft_register_rule pid=5280 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 07:40:06.839994 kernel: audit: type=1300 audit(1734075606.828:531): arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffc6d934e70 a2=0 a3=7ffc6d934e5c items=0 ppid=2405 pid=5280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:40:06.828000 audit[5280]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffc6d934e70 a2=0 a3=7ffc6d934e5c items=0 ppid=2405 pid=5280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:40:06.828000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 07:40:06.850314 kernel: audit: type=1327 audit(1734075606.828:531): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 07:40:06.850000 audit[5280]: NETFILTER_CFG table=nat:119 family=2 entries=106 op=nft_register_chain pid=5280 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 07:40:06.850000 audit[5280]: SYSCALL arch=c000003e syscall=46 success=yes exit=49452 a0=3 a1=7ffc6d934e70 a2=0 a3=7ffc6d934e5c items=0 ppid=2405 pid=5280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:40:06.863515 kernel: audit: type=1325 audit(1734075606.850:532): table=nat:119 family=2 entries=106 op=nft_register_chain pid=5280 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 07:40:06.863654 kernel: audit: type=1300 audit(1734075606.850:532): arch=c000003e syscall=46 success=yes exit=49452 a0=3 a1=7ffc6d934e70 a2=0 a3=7ffc6d934e5c items=0 ppid=2405 pid=5280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:40:06.863780 kernel: audit: type=1327 audit(1734075606.850:532): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 07:40:06.850000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 07:40:07.729547 systemd[1]: Started sshd@22-10.230.78.246:22-139.178.89.65:48616.service. Dec 13 07:40:07.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.230.78.246:22-139.178.89.65:48616 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 07:40:07.737201 kernel: audit: type=1130 audit(1734075607.730:533): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.230.78.246:22-139.178.89.65:48616 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:40:08.638000 sshd[5282]: Accepted publickey for core from 139.178.89.65 port 48616 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY Dec 13 07:40:08.636000 audit[5282]: USER_ACCT pid=5282 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:40:08.649321 kernel: audit: type=1101 audit(1734075608.636:534): pid=5282 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:40:08.655865 kernel: audit: type=1103 audit(1734075608.648:535): pid=5282 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:40:08.648000 audit[5282]: CRED_ACQ pid=5282 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:40:08.650723 sshd[5282]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 07:40:08.661032 kernel: audit: type=1006 audit(1734075608.648:536): pid=5282 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Dec 13 07:40:08.648000 audit[5282]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffec29f4460 a2=3 a3=0 items=0 ppid=1 pid=5282 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:40:08.648000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 07:40:08.668733 systemd[1]: Started session-23.scope. Dec 13 07:40:08.669112 systemd-logind[1301]: New session 23 of user core. 
Dec 13 07:40:08.677000 audit[5282]: USER_START pid=5282 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:40:08.680000 audit[5287]: CRED_ACQ pid=5287 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:40:09.366607 sshd[5282]: pam_unix(sshd:session): session closed for user core Dec 13 07:40:09.367000 audit[5282]: USER_END pid=5282 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:40:09.367000 audit[5282]: CRED_DISP pid=5282 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:40:09.370000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.230.78.246:22-139.178.89.65:48616 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:40:09.370908 systemd[1]: sshd@22-10.230.78.246:22-139.178.89.65:48616.service: Deactivated successfully. Dec 13 07:40:09.372247 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 07:40:09.373793 systemd-logind[1301]: Session 23 logged out. Waiting for processes to exit. Dec 13 07:40:09.375511 systemd-logind[1301]: Removed session 23. Dec 13 07:40:14.513487 systemd[1]: Started sshd@23-10.230.78.246:22-139.178.89.65:57254.service. Dec 13 07:40:14.523143 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 13 07:40:14.523797 kernel: audit: type=1130 audit(1734075614.513:542): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.230.78.246:22-139.178.89.65:57254 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:40:14.513000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.230.78.246:22-139.178.89.65:57254 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 07:40:15.435000 audit[5334]: USER_ACCT pid=5334 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:40:15.436956 sshd[5334]: Accepted publickey for core from 139.178.89.65 port 57254 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY Dec 13 07:40:15.442919 kernel: audit: type=1101 audit(1734075615.435:543): pid=5334 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:40:15.441000 audit[5334]: CRED_ACQ pid=5334 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:40:15.448925 sshd[5334]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 07:40:15.457501 kernel: audit: type=1103 audit(1734075615.441:544): pid=5334 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:40:15.457597 kernel: audit: type=1006 audit(1734075615.442:545): pid=5334 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Dec 13 07:40:15.466840 kernel: audit: type=1300 audit(1734075615.442:545): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc7f066dd0 a2=3 a3=0 items=0 ppid=1 pid=5334 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:40:15.466968 kernel: audit: type=1327 audit(1734075615.442:545): proctitle=737368643A20636F7265205B707269765D Dec 13 07:40:15.442000 audit[5334]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc7f066dd0 a2=3 a3=0 items=0 ppid=1 pid=5334 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:40:15.442000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 07:40:15.472511 systemd-logind[1301]: New session 24 of user core. Dec 13 07:40:15.475073 systemd[1]: Started session-24.scope. 
Dec 13 07:40:15.482000 audit[5334]: USER_START pid=5334 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:40:15.491974 kernel: audit: type=1105 audit(1734075615.482:546): pid=5334 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:40:15.492000 audit[5337]: CRED_ACQ pid=5337 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:40:15.513241 kernel: audit: type=1103 audit(1734075615.492:547): pid=5337 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:40:16.262937 sshd[5334]: pam_unix(sshd:session): session closed for user core Dec 13 07:40:16.264000 audit[5334]: USER_END pid=5334 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:40:16.277993 kernel: audit: type=1106 audit(1734075616.264:548): pid=5334 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:40:16.283577 kernel: audit: type=1104 audit(1734075616.264:549): pid=5334 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:40:16.264000 audit[5334]: CRED_DISP pid=5334 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:40:16.276073 systemd-logind[1301]: Session 24 logged out. Waiting for processes to exit. Dec 13 07:40:16.279236 systemd[1]: sshd@23-10.230.78.246:22-139.178.89.65:57254.service: Deactivated successfully. Dec 13 07:40:16.280787 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 07:40:16.282916 systemd-logind[1301]: Removed session 24. Dec 13 07:40:16.278000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.230.78.246:22-139.178.89.65:57254 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:40:20.694195 systemd[1]: run-containerd-runc-k8s.io-cca6b830759fb782ef48ee50ead9e2ca1809a7438bf018251a99625cfa9b7924-runc.Om55d1.mount: Deactivated successfully. Dec 13 07:40:21.409139 systemd[1]: Started sshd@24-10.230.78.246:22-139.178.89.65:60892.service. 
Dec 13 07:40:21.415659 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 07:40:21.416759 kernel: audit: type=1130 audit(1734075621.409:551): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.230.78.246:22-139.178.89.65:60892 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:40:21.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.230.78.246:22-139.178.89.65:60892 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:40:22.331010 sshd[5366]: Accepted publickey for core from 139.178.89.65 port 60892 ssh2: RSA SHA256:dQnQ6z9Pj/RNX8sNR4TqdGn8nHqynNIoEP6sXMH78jY Dec 13 07:40:22.329000 audit[5366]: USER_ACCT pid=5366 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:40:22.345392 kernel: audit: type=1101 audit(1734075622.329:552): pid=5366 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:40:22.337000 audit[5366]: CRED_ACQ pid=5366 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:40:22.360869 kernel: audit: type=1103 audit(1734075622.337:553): pid=5366 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:40:22.361157 kernel: audit: type=1006 audit(1734075622.337:554): pid=5366 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Dec 13 07:40:22.361218 kernel: audit: type=1300 audit(1734075622.337:554): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff0a170d00 a2=3 a3=0 items=0 ppid=1 pid=5366 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:40:22.337000 audit[5366]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff0a170d00 a2=3 a3=0 items=0 ppid=1 pid=5366 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 07:40:22.337000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 07:40:22.374284 sshd[5366]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 07:40:22.377048 kernel: audit: type=1327 audit(1734075622.337:554): proctitle=737368643A20636F7265205B707269765D Dec 13 07:40:22.390104 systemd[1]: Started session-25.scope. Dec 13 07:40:22.391862 systemd-logind[1301]: New session 25 of user core. 
Dec 13 07:40:22.406000 audit[5366]: USER_START pid=5366 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:40:22.416034 kernel: audit: type=1105 audit(1734075622.406:555): pid=5366 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:40:22.416661 kernel: audit: type=1103 audit(1734075622.414:556): pid=5369 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:40:22.414000 audit[5369]: CRED_ACQ pid=5369 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:40:23.090037 sshd[5366]: pam_unix(sshd:session): session closed for user core Dec 13 07:40:23.091000 audit[5366]: USER_END pid=5366 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:40:23.103918 kernel: audit: type=1106 audit(1734075623.091:557): pid=5366 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:40:23.104249 systemd[1]: sshd@24-10.230.78.246:22-139.178.89.65:60892.service: Deactivated successfully. Dec 13 07:40:23.091000 audit[5366]: CRED_DISP pid=5366 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:40:23.106048 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 07:40:23.107386 systemd-logind[1301]: Session 25 logged out. Waiting for processes to exit. Dec 13 07:40:23.113147 kernel: audit: type=1104 audit(1734075623.091:558): pid=5366 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.89.65 addr=139.178.89.65 terminal=ssh res=success' Dec 13 07:40:23.103000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.230.78.246:22-139.178.89.65:60892 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 07:40:23.113732 systemd-logind[1301]: Removed session 25.